Comment Re:I have my doubts (Score 1) 220

Ethically loaded? How? I don't see how the brain would be suffering. Or are they worried about Skynet?

Here in meatspace, putting a human in a sensory deprivation tank is torture and one of the more surefire ways to drive a person insane. The brain isn't wired to believe in null sensory data. If a region of the brain stops receiving stimulation, it frantically strengthens its connections to other regions, tapping randomly into its neighbors and interpreting their arbitrary stimulation as sense data (compare the hallucinations of sensory deprivation to phantom limb syndrome and somatosensory remapping in amputees, e.g. touching an amputee's face triggering sensation on the amputee's phantom fingers because the face and fingers are next to each other in the sensory map of the postcentral gyrus). If this frantic effort fails and the brain regions can't find a source of stimulus, they start to die outright, nerve by nerve, because nerves are wired to commit suicide if they don't fire regularly.

But even a sensory deprivation tank still provides senses of sound, proprioception, temperature, gravity. If we were to create an accurate nerve-by-nerve simulation of an entire vertebrate brain and then provide it with no sensory input.... Well, at best the result would be phantom body syndrome. At worst, it could go well beyond torture and become the greatest suffering ever inflicted on a single sentient being. And we don't understand the brain well enough to know which pieces of sensory data must be provided to maintain sanity and prevent existence from being torture.

And that's not even getting into the legal and moral issues. Let's say we can put a sane mind on a chip by cloning the synaptic structure of a recently deceased human and feeding it all the appropriate sensory inputs. Is it a person? Is it a citizen? Can it consent to a contract? Is it a minor until the hardware turns 18? May I enslave it? Is it ethical to feed it false sensory data culled from a virtual reality simulation, i.e. trap it in The Matrix? If I turn it off and erase it at the end of my scientific study, have I murdered it? If yes, am I legally obligated to keep it powered until the hardware fails? Am I morally obligated to transfer the synapse data to new hardware before the old hardware fails, making the uploaded human immortal? Alternatively, am I morally prohibited from doing that for more than N years, for some value of N? If I'm transferring the mind to new hardware, and my mistake causes a power surge that erases its synapses, am I protected by existing Good Samaritan laws, or have I committed involuntary manslaughter? As it's a simulated human mind, complete with all human appetites, am I obligated to provide it with pornography and the means to masturbate itself to orgasm? Do I have to obtain consent from the human whose deceased mind will be used to create the chip? Am I obligated to pick an atheist, or more to the point a subject who doesn't believe in souls? How well informed does the consenting subject have to be before dying? Does the family have to consent as well, or are we content with provoking another HeLa controversy for the greater good? And so on...

Comment Re:Not gonna happen. (Score 1) 904

The wear and tear on the body is such that even if you can increase the lifespan to a theoretical 150 years you wouldn't be very healthy for the last 90 or so years. You also need something that addresses the wear on the body. Our hearts aren't made for 150 years of use and we build up various plaques and toxins in our bodies as time goes by. Even if we all lived under controlled and ideal circumstances the last seven decades would be pretty much seven decades of being eighty.

Actually, there's some research that strongly suggests that there's only a finite amount of aging going on. What's happening in aging might not be "the body's self repair process falls behind entropy", as commonly thought. Instead, aging would be "the same tradeoffs which favor reproductive success in youth exact a cost later in life"; after some finite time, you've paid those costs in full and aging stops, leaving only a constant risk of disability and death per year instead of the ever-growing one postulated by the "falling behind on entropy" model. In this view, there are still some specific things that actually do wear out with age because they aren't constantly replaced (tooth decay and cornea clouding / cataracts are the obvious ones), but general health doesn't suffer the same fate.

See New Scientist's The end of ageing: Why life begins at 90 (behind a paywall, sadly), which references a demographic study where annual mortality rates became constant above age 93 (Greenwood and Irwin, Human Biology, 1939), a study confirming the same pattern in fruit fly populations (Carey and Curtsinger, Science vol. 258 p. 457 and p. 461, 1992), and an exploration of a mathematical model of mutation which concluded that a mortality plateau is inevitable, not a mere special case (Rose and Mueller; PNAS vol. 93 pp. 15249-15253, 1996). (Of note: Rose is the author of the New Scientist article, with all the confirmation bias that implies.)
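
To make the difference concrete, here's a toy comparison (parameter values invented, not taken from any of the studies above) between an ever-rising, Gompertz-style hazard and one that freezes at its age-93 value:

    # Toy comparison of an ever-rising (Gompertz-style) annual hazard vs. one
    # that plateaus in late life. Parameters are made up for illustration only.
    import math

    def gompertz_hazard(age, a=0.0001, b=0.085):
        """Annual mortality risk that grows exponentially with age."""
        return min(1.0, a * math.exp(b * age))

    def plateau_hazard(age, a=0.0001, b=0.085, plateau_age=93):
        """Same hazard, but frozen at its age-93 value thereafter."""
        return gompertz_hazard(min(age, plateau_age), a, b)

    for age in (70, 90, 93, 100, 110):
        print(age, round(gompertz_hazard(age), 3), round(plateau_hazard(age), 3))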

Also, the research into aging suggests there are only a handful of systemic problems that actually cause it (accumulation of crosslinked proteins; declining telomerase production causing cells to stop dividing; etc.), and if those systemic problems were addressed we could largely arrest the aging process. Aubrey de Grey's TED talk is pretty much mandatory viewing on that front.

It's worth keeping in mind that if metabolism and entropy inevitably led to cell death after 100 years, then human beings as a species would have already died out: sperm and egg cells are metabolically active cells that contain DNA that's millions of years old, and there's no time machine that allows a pristine copy of the germline DNA to be copied forward from conception to adulthood without at least a childhood's worth of accumulated error. Likewise for our mitochondria, pseudo-cells that they are, with their own mtDNA separate from the DNA of the nucleus, exposed to the entropic ravages of the Krebs cycle firsthand without a nuclear membrane to protect it; our bodies pass these pseudo-cells on from mother to child unchanged, without even giving their mtDNA a de-methylation/re-methylation spring cleaning like mammalian nuclear DNA receives. But they thrive in the germ cell line, generation after generation, even as they suffer and decline in the somatic cell lines. There must be a difference in upkeep, some cost that evolution is willing to pay for the germline but unwilling for the somatic lines, that allows the germline mitochondria to remain healthy and "young" for millions of years.

Comment Re:For IPv4? (Score 1) 151

I realize that IPv4 is going to be with us for quite some time, but is this going to be worth the effort? It requires a bit of jiggery-pokery to repoint your DNS, the kind of thing that appeals to the Slashdot crowd but which your grandma will never, ever pull off. ISPs could help, but will they do so before IPv6 makes it irrelevant?

It's described in IPv4 terms, but extending it to work with IPv6 addresses should be simple enough. The trickiest part will be finding the golden CIDR mask to replace IPv4's /24. A /64 gives up too much, since it identifies most ISP customers uniquely, and /48 has similar issues. Probably something near /32 or /40 would be appropriate, although you could probably do a lot with as little as /20.
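
For concreteness, here's a minimal sketch of the kind of prefix truncation the scheme would need, using Python's ipaddress module; the /40 cutoff is just my guess, not part of the proposal:

    # Collapse client addresses to a coarse prefix so nearby customers share a
    # cache key, the way the IPv4 scheme collapses them to a /24.
    # The /40 prefix length is a guess, not part of the proposal.
    import ipaddress

    def cache_group(addr, v4_prefix=24, v6_prefix=40):
        ip = ipaddress.ip_address(addr)
        prefix = v4_prefix if ip.version == 4 else v6_prefix
        return str(ipaddress.ip_network(f"{addr}/{prefix}", strict=False))

    print(cache_group("203.0.113.57"))           # 203.0.113.0/24
    print(cache_group("2001:db8:1234:5678::1"))  # 2001:db8:1200::/40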

Other than that, the described technique is still fully relevant because IPv6 doesn't change the game in any other way: DNS still works the same way, HTTP still works the same way, and websites are still slow for the same reasons, so you have the same incentives for regional caching and the same choices in how to do it.

Comment Re:Does anyone (Score 1) 127

Does anyone else see this as a giant security hole? As in, various schemes like this have been tried since the days of ActiveX, and the only reason ActiveX has the worst reputation is because it's the only one that gained widespread use?

The point of NaCl is that it's a virtual machine bytecode language, and you can statically verify (without running the code) that the bytecode conforms to the spec. However, for performance reasons, the bytecode language and the virtual machine architecture just happen to line up with the native machine code and native architecture. NaCl provides only a subset of the full instruction set, though, and this prevents the arbitrary pointer arithmetic and self-modifying code that could break out of the sandbox. NaCl authors actually need to recompile their code to x86 NaCl or ARM NaCl as a distinct GCC compiler target, instead of plain old x86 or ARM, because the NaCl targets are easily distinguishable from the native ones when you examine the machine code bytes. (The most important feature: all jumps are aligned and no instruction crosses an alignment boundary, so there's only one possible machine code interpretation for each byte of NaCl code.)
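
As a rough illustration of what that alignment rule buys you (my own sketch of the idea, not Google's actual verifier; the instruction-length decoder is a made-up stand-in):

    # Sketch of the bundle rule: every instruction must sit entirely inside one
    # fixed-size bundle, and every jump target must be bundle-aligned, so there
    # is only one possible decoding of the byte stream.
    BUNDLE = 32  # bytes per bundle

    def decode_length(code, offset):
        # Stand-in for a real x86/ARM instruction-length decoder; here we just
        # pretend every instruction is 4 bytes so the sketch runs.
        return 4

    def verify(code, jump_targets):
        offset, starts = 0, set()
        while offset < len(code):
            starts.add(offset)
            length = decode_length(code, offset)
            if offset // BUNDLE != (offset + length - 1) // BUNDLE:
                return False  # instruction straddles a bundle boundary
            offset += length
        # Jumps may only land on bundle-aligned instruction starts.
        return all(t % BUNDLE == 0 and t in starts for t in jump_targets)

    print(verify(bytes(64), {0, 32}))  # True for this toy 4-byte-instruction case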

Needless to say, this is a vastly different model compared to ActiveX, which was "we'll trust any old native code to make arbitrary system calls, just so long as there's an RSA signature attached". NaCl ditches the central trusted authority model that Microsoft preferred, and instead goes with the Java/JavaScript/Lua/LISP model of "you can only perform side effects that the interpreter chooses to expose to your code". As with the interpreted languages, your NaCl code is Turing-complete, so you can waste CPU and RAM until the cows come home, but you can't actually touch the filesystem, create GUI elements, or modify the address space of other processes unless Chrome decides to permit it. The only difference is that you don't run at some fraction of native code speed, but exactly at native code speed, and you can statically optimize as much or as little as you like, or write in any language you want (so long as someone's written a NaCl target for your language's compiler).

There will probably be a few bugs in the static verification logic that allow not-quite-NaCl code to slip through, but this is no worse than the sandboxing problems we already face with JavaScript in the browser. With JavaScript, those have even included double-free bugs that allowed overwriting arbitrary memory with native code and executing it. The risks with NaCl are no different.

Comment Re:Dark side? (Score 1) 196

The bright side is that the people who innovated to make the patents are being compensated for their efforts. This is how patents motivate people to innovate. Would you prefer if Google could use other people's innovations without compensating them?

If anything, patents in the software industry cause innovation NOT by rewarding the company that holds the patent so that it will feel inclined to invent more, but by encouraging companies to patent the lowest-hanging fruit and forcing everyone else to invent workarounds while the patent owner lords over the market by charging exorbitant prices. Think of gzip, PNG, Vorbis, Tarkin, and to a lesser extent VP3/Theora and VP8/WebM, which were all developed in response to the patents on LZW and MPEG-1 through MPEG-4 because the licensing terms were more than the market would bear.

Patents give their owners monopoly power, which ipso facto means that the licensing fees charged by the owner will never be at the free-market price created by the intersection of supply and demand. Even the MPEG-LA consortium, which actually goes to the effort of trying to invent a "fair" price, doesn't have enough information to determine what the fair price would be in the absence of a monopoly, e.g. what the MPEG algorithms would sell for if they were a contractually protected trade secret bought and sold on the open market (the scenario that patents were created to prevent).

Comment Re:Deceleration (Score 1) 133

where the vector "something" is often "velocity"

Just to nit-pick, you mean "the direction of movement". Velocity also implies the magnitude as well as the direction, and I don't see why we need to bring magnitude into the argument.

No, I meant what I said. The noun phrase "<vector X> in the opposite direction of <vector Y>" makes sense for any vectors X and Y, even though it doesn't define a relationship between their magnitudes or otherwise mention them.

Comment Re:Deceleration (Score 1) 133

In physics, "deceleration" is just an informal shorthand way of saying "acceleration in the opposite direction of something", where the vector "something" is often "velocity" by default but can be anything else depending on context. Saying "Pioneer is decelerating" is not quite right, then: the Pioneer craft are traveling on hyperbolic paths that slingshot away from the Sun on a curve, not zipping away in straight lines, so an acceleration toward the Sun would not point in the opposite direction from the velocity. It would slow them down since the velocity-acceleration angle is obtuse, but not as much as an actual 180 degree acceleration would. (Perhaps the acceleration is Sun-ward instead of backward because the Pioneer craft aligned their spins to keep their radio dishes pointed toward Earth, and asymmetry makes them emit more RTG heat on the opposite side from the dishes? Pure speculation on my part.)

Comment Kurzweil: $AMAZING_TECH by $RIDICULOUS_DATE (Score 1) 186

Can we start marking Kurzweil articles as dupes?

Granted, this is a little less ridiculous than some of his past claims — machine translation has improved a lot in the O(decade) since Babelfish — but translation algorithms are still context-blind for the foreseeable future, because no one's yet found a computationally feasible shortcut for the "every Bayesian probability is dependent on every other Bayesian probability" case that natural language seems to teeter toward. Moore's law isn't going to fix it, either, because it's not a polynomial-time problem that can be solved by throwing faster clocks or more cores at it. We've gotten as far as we have by using dumber, polynomial-time algorithms and throwing supertankers of training data at the problem, but in the end it's no more contextual than Dissociated Press.
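
For anyone who hasn't played with Dissociated Press: it's essentially a word-level Markov chain, roughly like this sketch, which is why the output is locally plausible but globally context-free:

    # Minimal Dissociated-Press-style generator: pick the next word based only
    # on the previous one, with no wider context. Illustrative, not a real MT system.
    import random
    from collections import defaultdict

    def build_chain(text):
        words = text.split()
        chain = defaultdict(list)
        for prev, nxt in zip(words, words[1:]):
            chain[prev].append(nxt)
        return chain

    def babble(chain, start, length=20):
        out = [start]
        for _ in range(length):
            choices = chain.get(out[-1])
            if not choices:
                break
            out.append(random.choice(choices))
        return " ".join(out)

    corpus = "the spirit is willing but the flesh is weak"
    print(babble(build_chain(corpus), "the"))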

Incidentally, it's clear that natural language translation is not actually as difficult as computing the maximum likelihood of a fully cross-connected Bayes network (i.e. superpolynomial-time), or else the human brain itself would be stumped. But we don't know enough about which shortcuts are useful for convincing human brains versus which shortcuts result in "the vodka is good but the meat is rotten" translations. That means we're stuck theorizing from our armchairs, throwing algorithmic crap at the wall and seeing what sticks, or maybe poking at brains with pointy sticks and fMRIs. On this matter, date predictions are worthless. The breakthrough could come tomorrow or hundreds of years from now, and Kurzweil is no better equipped to predict the date than my cat is.

Comment Did they even ask? (Score 1) 71

One interesting element of these findings is that the achievements that are highly correlated – or part of the same clique – do not necessarily have any obvious connection. For example, an achievement dealing with a character’s prowess in unarmed combat is highly correlated to the achievement badge associated with world travel – even though there is no clear link between the two badges to the outside observer.

Really, no clear link? Did they even ask one player? These are both low-hanging fruit for the solo completionist. In particular, I suspect that north of 90% of players with the 400 unarmed weapon skill achievement will have World Explorer, although the relationship will be lower in the reverse direction — the former is a bit more of a time investment, and much more boring and tedious (Blizzard removed weapon skills for a reason), whereas World Explorer is something that can be knocked out by an hour-a-day casual player in two weeks with no problem. Since World Explorer can easily be teamed up with book collecting, critter /love-ing, the zone and continent quest completions, and Loremaster, I suspect those all form a single clique of solo completionist achievements, with some sub-cliques that are a bit more accessible to the casual player.
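
For what it's worth, the correlation the study is talking about is just a pairwise phi coefficient over a players-by-achievements 0/1 matrix, roughly like this (the player rows below are invented, not the study's data):

    # Toy version of the study's pairwise correlation: phi coefficient between
    # two 0/1 achievement columns. The player rows are invented.
    import math

    def phi(x, y):
        n = len(x)
        n11 = sum(a * b for a, b in zip(x, y))
        n1_, n_1 = sum(x), sum(y)
        num = n * n11 - n1_ * n_1
        den = math.sqrt(n1_ * (n - n1_) * n_1 * (n - n_1))
        return num / den if den else 0.0

    unarmed_400    = [1, 1, 0, 1, 0, 0, 1, 0]  # has the 400 unarmed skill badge
    world_explorer = [1, 1, 0, 1, 0, 1, 1, 0]  # has World Explorer
    print(phi(unarmed_400, world_explorer))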

Comment Re:Software / Firmware (Score 3, Interesting) 119

Why is it important that Linux drivers have source available, but we don't worry so much about seeing the firmware source? Should we be pushing to see firmware source too? Or should it not matter whether we see driver source either? I'd love to hear your perspectives.

Device A has an open source driver, proprietary guts, and a firmware blob loaded by the driver on boot.

Device B has an open source driver, proprietary guts, and a firmware blob hidden in an immutable ROM on the device that you don't know about.

For some reason, Debian scorns Device A and praises Device B, even if the firmware blob for Device A allows unlimited redistribution. For the most part I like Debian, but that policy is just silly: Device A is the one that has the greater potential for end-user hackability.

Comment Re:it's been said (Score 1) 278

Wait, nevermind. I seem to be confused about something, Wikipedia says the transcendentals are uncountable.

In another universe, perhaps they are ...

For the rest of us, Taylor series are the best oculus I can think of into the known transcendentals (Pi, e, sin(a/b), etc.). However, most transcendentals will remain obscure by virtue of being irrelevant.

Even in this universe, there are uncountably many uncomputable transcendentals. The canonical example of an uncomputable real is Chaitin's constant, which is the probability that a randomly chosen computer program (in some specified language) will halt. We can figure out the first few digits, but beyond that it's seemingly impossible to calculate. If you had a way to iteratively generate the digits in Chaitin's constant, you could solve the halting problem, and vice versa.
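
For contrast with Chaitin's constant, here's the sense in which e is computable: a short program can emit as many correct digits as you ask for. A sketch using Python's Fraction for exact arithmetic:

    # e is a computable transcendental: this emits as many correct digits as
    # you like, which is exactly what no program can do for Chaitin's constant.
    from fractions import Fraction

    def e_digits(n_digits):
        # Sum terms of e = sum(1/k!) until the next term is negligibly small,
        # then read off the decimal expansion.
        total, term, k = Fraction(1), Fraction(1), 1
        while term > Fraction(1, 10 ** (n_digits + 1)):
            term /= k
            total += term
            k += 1
        scaled = total * 10 ** n_digits
        return str(scaled.numerator // scaled.denominator)

    print(e_digits(30))  # 2718281828459045235360287471352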

My great-grandparent post was under the seemingly mistaken impression that "transcendental" referred to a distinct subset of the reals, something like "the reals with a well-defined Taylor series that are not algebraic", but instead (per Wikipedia) it appears to be "the reals that are not algebraic", which is a vastly larger set.

Comment Re:it's been said (Score 1) 278

Wait, nevermind. I seem to be confused about something, Wikipedia says the transcendentals are uncountable. There's probably an argument for why Turing Machines can't be used to uniquely identify arbitrary transcendentals, e.g. there aren't enough Turing Machines to go around (Cantor argument), or you can't prove that a given Turing Machine identifies the transcendental you're seeking (Halting problem), or something.

I'm still fairly convinced that pi is a computable transcendental, and probably likewise for most of the transcendentals that we've given names to, but I now suspect that the majority of the transcendentals don't fall in that category. Oops.
