
Comment Re:Not Quite a Resounding Success (Score 1) 73

If you're so clever, show us your system which does this. Oh, wait, you don't have one, do you?

Actually, I do. It's called my arms.

I really wish people would stop using "brain controlled" for "brain plus millions of dollars of specialized machinery to replace your arms controlled". Saying something is "brain controlled" tells us nothing--it's like calling heavier-than-air flight "massive flight", or fixed-wing aircraft "aerofoil flight". The terminology does nothing to differentiate one thing from another.

While this may seem like a trivially pedantic cavil, it has been my experience that terminology that differentiates on the basis of non-essentials very often ends up misleading laypeople. There is already a robust mythology of disembodied brains as viable objects of philosophical consideration (really); this kind of sloppy language is at the very least not helping.

So can we please start calling these "arm-free controllers" or similar, and acknowledge that there is always a brain involved? We're replacing the interface, not introducing a brain. It's like calling a touch-screen machine a "CPU-controlled computer" because it lacks a keyboard.

Comment Re:Ai is inevitable (Score 1) 339

it is not. It's a fixed real thing that exists.

Which has nothing at all to do with computability.

We are not Turing machines. This is obvious. Turing machines don't have I/O. Turing machines don't have sensors or effectors. We do.

We can and do interact with the world in ways that Turing machines do not, and those interactions are a fundamental aspect of our intelligence.

This means that we can compute things that Turing machines can't. If we coupled a Turing machine to sensors and effectors (that is, built a robot) it would have the potential to be as intelligent as we are, but it would no longer be a Turing machine, and it would be able to reach conclusions about non-computable problems, just as we can.

Turing computability is one very, very limited aspect of intelligence. Interaction with the world is at least as important.

Comment Re:What the f*$# is wrong with us? (Score 5, Insightful) 1198

But throwing one group under the bus to stand up for another still results in just as many people getting hit by the bus.

The thing that all these finger-wagging missives fail to take into account is that masculinity, like femininity, is a social construct. There are underlying biological differences between the male and female populations, but there are also broad distributions of individual characteristics, and the gender binary model attempts to impose a crisp, discontinuous division between "masculine" and "feminine".

In doing so, it does violence to anyone who fails to fit very well with the nominal masculine or feminine ideals of the society they happen to find themselves in.

The feminist movement has done a reasonably good job, more-or-less, in pointing out how these forces operate to shape women's lives.

We have done a lousy job of appreciating that the same kinds of forces shape men's lives as well, so we get these ridiculous claims that individual men are creatures of perfect agency, utterly unaffected by the social forces that are attempting to bludgeon them into good little emotionless soldiers (or whatever your society's favoured model of masculinity is at the moment). Telling profoundly damaged, struggling individuals to "stop whining" and so on is the opposite of what they need. They need to be told: "I feel your pain, but I hate your behaviour..."

The utter lack of compassion for men, and the complete lack of awareness of how the social construction of masculinity affects them, is one of the most depressing things about the current discourse on these issues.

None of this excuses individuals who behave badly, but if we want men to get better, we have to stop failing them as completely and systematically as we are now. We have to start valuing their lives, their experiences, their reality, rather than simply hitting them harder with various real and rhetorical hammers when they refuse to fit into the socially constructed masculine role that has been prepared for them.

Comment Re:Errors (Score 4, Insightful) 230

The slightly surprising part is that the misclassified images seem so close to those in the training set.

With emphasis on "slightly". This is a nice piece of work, particularly because it is constructive--it both demonstrates the phenomenon and gives us some idea of how to replicate it. But there is nothing very surprising about demonstrating "non-linear classifiers behave non-linearly."

Everyone who has worked with neural networks has been aware of this from the beginning, and in a way this result is almost a relief: it demonstrates concretely, for the first time, a phenomenon that most of us suspected was lurking in there somewhere.

The really interesting question is: how dense are the blind spots relative to the correct classification volume? And how big are they? If the blind spots are small and scattered then this will have little practical effect on computer vision (as opposed to image processing) because a simple continuity-of-classification criterion will smooth over them.
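
The answer matters for the fix, too. Here is a minimal sketch of the continuity-of-classification idea (Python with numpy; 'classify' is a hypothetical stand-in for any trained classifier, and the noise scale is illustrative): take a majority vote over small random perturbations of the input, so isolated blind spots get out-voted by the correct classification volume around them.

    import numpy as np

    def smoothed_classify(classify, x, n_samples=100, noise_scale=0.05, rng=None):
        """Majority vote over randomly perturbed copies of x.

        If the misclassified blind spots are small and scattered, most
        perturbed copies land in the correct classification volume and
        the vote smooths over them.
        """
        rng = rng or np.random.default_rng()
        votes = {}
        for _ in range(n_samples):
            label = classify(x + rng.normal(0.0, noise_scale, size=x.shape))
            votes[label] = votes.get(label, 0) + 1
        return max(votes, key=votes.get)

If the blind spots turn out to be large or dense, of course, no amount of voting will smooth over them.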

Comment Re:Pretty stupid reasoning (Score 1) 405

I could easily charge 8–10 € per standard page, so a 200-page novel could easily reach 2000 €. And that's just proofreading! Editing would cost much more.

Proofing my novel (344 pages) would have cost $1800 in Canada through a reputable service. I've talked to editors who charge around $1500 and up to turn your book into something publishable. That's somewhat less than you're suggesting.

I agree many people are ill-equipped to deal with the costs and skills required for independent publishing... but those same people are also incapable of getting published traditionally.

The distribution looks like this:

|A|aaaaaaaaaa|bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb....

where:

A = traditionally published authors
a = people with the skills/resources to be successful indie authors
b = everyone else

Obviously b >> a, but equally obviously, a >> A.

While focusing on the undoubted fact that b is a huge population of incompetents, it is important not to lose sight of the equally certain fact that there are many competent, inspired, creative people in the "a" population who for various reasons can't get a leg up in the traditional publishing world.

The question is, "If we have to, are we willing to ditch A for a?"

Personally, I am. I don't think the world would be a poorer place on net if the Charles Strosses of the world had to go back to ditch digging while ten times their number became successful indie author/publishers.

Comment Re:Amazon is short-sighted (Score 1) 405

And out of that 70%, the writer now has to supply their own editors, artwork, proof readers and layout specialists.

Or they need to learn to do it themselves, most of which is not too difficult.

My recent book (DRM free on Amazon: http://www.amazon.com/Darwins-...) cost me $200 to produce (assuming my time is worth nothing), which went to pay for the cover art.

If I had wanted to pay an editor it would have cost around $1500--I looked into this, but decided to play with a combination of early-reader feedback (I have a number of friends whom I trust to tell me when things are crap, and believe me, they did), mechanical editing based on research-grade natural-language processing tools, and semi-automated proofreading (which I wrote my own code for using a variety of heuristics tailored to the kinds of errors I'm particularly prone to making.)
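
To give a flavour of the kind of heuristics I mean (an illustrative toy, not my actual tooling; the patterns shown are examples): flag doubled words and a short list of confusions that spell-checkers sail right past, tuned to your own habits.

    import re
    import sys

    # Toy proofreading pass: doubled words ("the the") plus error patterns
    # a spell-checker misses. The real value comes from tailoring the
    # pattern list to the errors you are actually prone to making.
    DOUBLED = re.compile(r'\b(\w+)\s+\1\b', re.IGNORECASE)
    CONFUSABLES = [
        (re.compile(r"\bits\s+(?:a|the)\b"), "possible 'its' for 'it's'"),
        (re.compile(r"\byou\s+pain\b"), "possible 'you' for 'your'"),
    ]

    def proof(path):
        with open(path) as f:
            for lineno, line in enumerate(f, 1):
                if DOUBLED.search(line):
                    print(f"{path}:{lineno}: doubled word: {line.strip()}")
                for pattern, message in CONFUSABLES:
                    if pattern.search(line):
                        print(f"{path}:{lineno}: {message}: {line.strip()}")

    if __name__ == '__main__':
        for path in sys.argv[1:]:
            proof(path)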

I'm sure there are typos and minor grammar issues remaining, but at a level that is not materially worse than many professionally edited books. And this was a first pass at this method. I'm sure that with more work the process of editing and proofreading can be much more highly automated, although nothing short of a full AI will be able to replace first readers for basic feedback.

As to design and layout, anyone with a reasonable level of HTML, CSS and LaTeX experience should be able to produce a decent-looking ebook or print book. There are tricks, but it's not rocket science.

We are in the early days of indie publishing, and things are only going to get better as we automate more processes and lower barriers between authors and readers. There is nothing today stopping a writer from producing a professional-quality book with minimal resources, and that's a good thing.

Comment Re:Alternative Summary (Score 1) 405

Exactly. Stross is, after all, the author of "A Score is not an Album", or something equally limited in its perspective. He apparently believes that there is no indie music, because musicians are not capable of doing all the things that labels do.

While I'm perfectly willing to believe that Charles Stross is incapable of doing all the things that are required to make an indie author successful--hire or otherwise collaborate with a decent, professional editor, learn or hire out for design and production, pursue marketing opportunities, etc.--the claim that no one has those capabilities is obviously absurd.

There will be plenty of books to read in the glorious future... it just won't be Charles Stross writing them. While that sucks for him, it isn't clear that his product is of a sufficiently novel and irreplaceable kind that anyone will miss him more than they will appreciate the indie authors who would otherwise be shut out by the traditional publishing industry.

He fails particularly on the claim that this process will end by "leaving just Amazon as a monopoly distribution channel retailing the output of an atomized cloud of highly vulnerable self-employed piece-workers like myself."

There was this clever guy who once pointed out the "contradictions in capitalism" and suggested that situations like this were unstable. There was another guy--possibly more clever, as he didn't think world-wide revolution was required to resolve them--who even coined a term for what happens when such an unsatisfactory situation arises: "creative destruction".

Amazon will no doubt try to build and maintain a monopoly, but already there is an indie ecosystem that is reacting to that and working to create alternative quality and delivery systems. It is still in its infancy, but Amazon is going to lose in the end, because they can't control the delivery process. At worst they will become the iTunes of books: influential, but hardly all-powerful.

Comment Re:Amazon provides a service (Score 1) 218

Amazon sells books. It does not write them or publish them.

This is not strictly true, at least with respect to publishing. Amazon owns CreateSpace, which is a publisher. As such, it is in direct competition with other publishers, or soon will be.

CreateSpace is currently aimed at the indie/print-on-demand market (for example: https://www.createspace.com/47...) but Amazon has expressed an interest in branching out into mainstream publication.

As such, it is positioned to dominate the publishing and distribution vertical completely, and people are worried about this, for good or ill. This story is less about what Amazon is doing today than what they might do tomorrow.

Personally, as an independent author/publisher I'm not too worried: the more restrictive Amazon becomes the more they set up the conditions of their own demise, because they have no way of effectively erecting barriers to entry in the publishing business, particularly in e-books.

No one will be able to make any money at it, but for authors writing hasn't been about money for decades, so this won't change anything except the viability of traditional publishing.

Comment Re:You bet they are "quietly optimistic".. (Score 2) 80

In NM, as in Oz, a lot of fires start as brushfires -- no wood, no particular heat retention -- stop it even briefly and it doesn't get into the forests.

Reading between the lines of TFA, this appears to be an extremely optimistic reading of the research, which so far involves blowing a flame off a propane source, and appears in the long term to be directed at separating flame from tree-tops, with the idea that this will slow down the rate of spread, not put the fire out. It would give emergency services more time to respond by quenching the fast-moving tree-top phase of the fire.

The problem is that there will still be glowing coals on the woody stems, even if they are just brush (unless it is a pure grass fire, which is pretty rare, and even then there are usually bushes with woody stems involved.) Those coals will have the potential to re-ignite the fire, although the goal of slowing things down may be achieved, and it may well be enough to both save lives and to allow firefighters to get things under control.

This is an interference measure, not an extinguisher.

Comment Re:Times sure are changing (Score 5, Insightful) 147

"Messing with life", as you call it, has an incredible potential for doing harm if approached carelessly. It doesn't take much imagination to realize this, either: synthetic infectious agents, engineered organisms that displace natural diversity, and so on.

You've missed the GP's point, and created an instance of his observation.

There is almost nothing we do that doesn't have "an incredible potential to do harm", and ubiquitous computational intelligence is one of the most obvious candidates for that fear going... yet hardly anyone is afraid of it.

Ubiquitous computational intelligence (UCI) has the potential to put everyone under constant observation, including position tracking. It has the potential to serve ads to you in your sleep, monitor your caloric intake, keep track and report your alcohol consumption, your masturbation habits... everything. It's Orwell's telescreens on steroids.

Yet the response to such things on /., while sometimes somewhat skeptical, is mostly positive. Relatively minor messing with the genome of some fairly rare creature, on the other hand, brings out the panic, with flat-out bizarre, anti-Darwinian statements like "these things died out for a reason" (posted by an AC above, who makes points similar to yours.)

Sure, messing with genomes carries risks, but they are comparable to the risks we take with all kinds of technological development, and yet for some reason people seem a lot more sensitive to them. It may not be explicitly religious, but it sure isn't rational.

Comment Re:It's all about ME, ME, ME. (Score 4, Interesting) 255

IMOHO, one of the reasons that many people think that robots are "hyper-competent" is that too many people think that a program can encompass and accommodate every possible circumstance.

This simply reflects the tendency people have to believe in their own hyper-competence. Most interesting ethical issues are unsolvable in any formal sense by virtue of three simple facts:

1) moral values are ordinal, not cardinal (I value my children's lives more than my cat's life, no matter how many cats I have)

2) we value outcomes but choose actions

3) outcomes are related to actions by some more-or-less broad probability distribution.

This means we cannot choose outcomes directly, and we cannot fall back on probability calculations to assign values to actions, because ordinals don't support simple arithmetic.
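
A minimal sketch of the obstruction, with illustrative ranks (nothing here is special to ethics; it is just what "ordinal" means): comparison between ordinal values is well-defined, but the arithmetic that expected-value calculations need is not.

    from functools import total_ordering

    @total_ordering
    class Ordinal:
        """A rank-only value: supports comparison, deliberately no arithmetic."""
        def __init__(self, rank, label):
            self.rank, self.label = rank, label
        def __eq__(self, other):
            return self.rank == other.rank
        def __lt__(self, other):
            return self.rank < other.rank

    cat = Ordinal(1, "cat's life")
    child = Ordinal(2, "child's life")

    print(child > cat)  # True -- ranking outcomes is fine
    # 0.9 * child + 0.1 * cat  -> TypeError: there is no meaningful
    # probability-weighted average of ordinals, so the "expected moral
    # value" of an action cannot be computed from them.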

There are two special cases that fortunately cover a lot of every-day life:

A) the probability distribution is narrow enough that we can ignore it, so we can effectively choose outcomes based on our ordinal values

B) there is a market in the outcomes we are choosing between, which allows us to compute cardinal (dollar) values from our ordinals, so we can do probability calculations on the domain.

But interesting moral quandaries are simply not computable, and talking about them as if they were, even by human beings, is to be on a hiding to nothing.

Comment Re:Bad move (Score 1) 280

Well this Mike Hopkins guy is mostly comparing neutron yields from the D-T reaction LPP were testing with.

Which matters a great deal, because Hopkins shows the device Lerner et al have built behaves well within the limits on such devices. That makes claims of a near-breakthrough much less plausible.

Attempts to generate net power from plasma instabilities have a long and storied history, going back (at least) to the Farnsworth fusor. We know how to get neutrons from such devices, but the goal of net power has remained elusive for decades.

The claim in TFA that "As they leave, the electrons in the beam interact with the electrons in the plasmoid and heat up the area to over 1.8 billion degrees Celsius, which is enough to get fusion reactions" has multiple issues, although some of them may be due to the nature of technology "journalism".

Ignoring the fact that electron-electron interactions are not what you need for plasma heating (electrons, being very light, have trouble transferring much energy to ions), the claim is that they are generating a thermal plasma with a temperature of 1.8E9 C, whereas the conventional explanation would be that the neutrons they are seeing are from beam/plasma interactions. The important fact is that beam/plasma interactions do not scale in a way that would allow them to produce net power, ever.

So it appears they are a) seeing neutrons and claiming b) the neutrons are due to a thermal plasma which given the other parameters they infer must be c) at 1.8E9 C.
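
As a sanity check on the temperature claim (nothing device-specific here, just the standard Boltzmann-constant conversion; the beam-energy comparison is my own gloss):

    # Convert the claimed 1.8e9 C "temperature" to an energy scale.
    K_B = 8.617e-5             # Boltzmann constant, eV/K

    T = 1.8e9                  # kelvin, to within rounding, at this magnitude
    E_keV = K_B * T / 1e3
    print(f"kT ~ {E_keV:.0f} keV")   # ~155 keV

    # D-T fuses readily at far lower thermal energies than this, and ion
    # beams in pinch devices are routinely inferred at comparable energies,
    # so a neutron signal alone cannot distinguish a thermal plasmoid from
    # beam/target reactions.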

Hopkins is pointing out that claim b is unlikely, and that conventional beam/plasma theory can account for their neutrons just fine.

Comment Re:What Could Possibly Go Wrong? (Score 5, Interesting) 74

The neat thing about terminal cancer patients is that the answer is "Not much that would be worse than the alternative."

Conversely, this high bar makes it very difficult to improve on invasive but adequate treatments. Consider mastectomy for early-stage breast cancer: it works pretty well, and that makes it damned near impossible to test any alternative treatment that might work just as well or better, and which would certainly be less invasive.

I worked on a cancer-therapy project once and had the clever idea of applying the technique we were using--which was aimed at something that was incurable at the time--to certain kinds of breast cancer, which was just similar enough to be an interesting candidate for the technique. I talked to a breast cancer researcher and he said, "That's a really clever idea. It sounds plausible. I can't do anything with it." And then explained the above reasoning.

This means that we tend to focus on treatments for currently untreatable cancers, and once we have something that is semi-OK, the rate of improvement goes way down. It doesn't go to zero, by any means, but the incentives shift in a way that is both perfectly logical and kind of perverse.

Comment Re:The Science is settled! (Score 5, Insightful) 330

In the same way that one cannot expect a nice fit between observational studies and the CMIP5 models.

This is a point that is radically misunderstood by almost all sides of the political debate around anthropogenic climate change. Think about what it implies: climate models do not predict observational reality. That, and only that, is why one cannot and should not expect a nice fit between the model and the reality.

This is OK, mind: non-predictive modelling is extremely useful, and there is very little doubt that human activity is adding about 1.6 W/m**2 to the Earth's heat budget (somewhat less than 0.5% of the total, equivalent to an orbital perturbation of about half the distance to the Moon). But climate models do not tell us in any meaningful or useful sense how the ocean/atmosphere system will respond to that additional heating.

There will be a response, but estimating its type, distribution and magnitude well enough to be considered predictive is well beyond current model capabilities. I haven't looked at AR4 or 5 code, but AR2 had approximations that made me cringe, up to and including fixing up energy conservation at the end of each time-step by adjusting cell temperatures.

Climate skeptics--the sane ones at least--are aware of this and take the strong claims of predictive power in the models with a large grain of salt. They also tend to assume that "you can't prove there will be a disaster" means "there won't be a disaster", which is utterly unwarranted.

Climate believers also ignore the poor predictivity of the models, which is unfortunate, because the logical response to that poor predictivity is to invest in robustness and flexibility rather than specific solutions, because we don't know what the specific future conditions will be.

Climate believers also undermine their case by an excessive focus on "abstinence only" policies, and are for some reason unwilling to contemplate any response to climate change that involves things like nuclear power and geo-engineering research. It's almost as if they think the climate-driven destruction of civilization is such a huge issue that we must be willing to do anything to stop it... except change anyone's mind on the relative value of nuclear energy.

Comment Re:From whence the headline? (Score 3, Interesting) 116

And we won't until testing (automated or otherwise) gets better in both places.

I'm skeptical of testing (automated or otherwise), and I think the point in TFS is well-taken: testing that would have caught this bug would have involved creating tests that virtually duplicated the system under test.

While some code is amenable to test-driven development and thorough testing, and that should be done wherever possible, the resources required to test some code effectively double the total effort required, and maintaining the tests becomes a huge headache. I've worked in heavily-tested environments and spent a significant fraction of my time "fixing" tests that weren't actually failing, but which due to changes in interfaces and design had become out-of-date or inappropriate.

That's not to say that testing can't be done better, but it's clearly a hard problem, and I've yet to see it done well for the kind of code I've worked on over the past 20 years (mostly algorithmic stuff, where the "right" answer is often only properly computable by the algorithm that is supposed to be under test, although there are constraints on correct solutions that can be applied.)

So I'm arguing that a culture of professionalism that implements best practices, including coding standards and code reviews (possibly automated) that check for simple things like open if statements and unchecked memory access, would be lower cost and at least as effective as heavier-weight testing.

This is a static-analysis vs dynamic-analysis argument, and while I certainly agree that dynamic analysis is necessary, both these bugs would have been caught with fairly simple-minded static analyzers checking against well-known coding standards from a decade ago.
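
As a toy illustration of how cheap such a check can be (a sketch, not a real analyzer; a production tool would actually parse the code): flag brace-less C if statements followed by a second statement at the same indentation, which is the shape of the famous "goto fail" bug.

    import re
    import sys

    # Flag 'if (...)' with no brace whose single-statement body is followed
    # by another line at the same indentation: the second line *looks*
    # conditional but executes unconditionally.
    IF_NO_BRACE = re.compile(r'^\s*if\s*\(.*\)\s*$')

    def indent_of(line):
        return len(line) - len(line.lstrip())

    def check(path):
        with open(path) as f:
            lines = f.readlines()
        for i, line in enumerate(lines):
            if not IF_NO_BRACE.match(line):
                continue
            body = lines[i + 1] if i + 1 < len(lines) else ''
            after = lines[i + 2] if i + 2 < len(lines) else ''
            if body.strip() and not body.lstrip().startswith('{'):
                if after.strip() and indent_of(after) == indent_of(body):
                    print(f"{path}:{i + 1}: brace-less 'if' followed by "
                          "same-indent statement")

    if __name__ == '__main__':
        for path in sys.argv[1:]:
            check(path)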
