Comment Re:Errors (Score 4, Insightful) 230

The slightly surprising part is that the misclassified images seem so close to those in the training set.

With emphasis on "slightly". This is a nice piece of work, particularly because it is constructive--it both demonstrates the phenomenon and gives us some idea of how to replicate it. But there is nothing very surprising about demonstrating "non-linear classifiers behave non-linearly."

Everyone who has worked with neural networks has been aware of this from the beginning, and in a way this result is almost a relief: it demonstrates for the first time a phenomenon that most of us suspected was lurking in there somewhere.

The really interesting question is: how dense are the blind spots relative to the correct classification volume? And how big are they? If the blind spots are small and scattered then this will have little practical effect on computer vision (as opposed to image processing) because a simple continuity-of-classification criterion will smooth over them.
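To make that last point concrete, here is a toy sketch (mine, not the paper's) of what a continuity-of-classification check could look like, assuming some generic classify() function that maps an image (pixel values in [0, 1]) to a label:

```python
import numpy as np
from collections import Counter

def smoothed_classify(classify, image, n_samples=20, noise_scale=0.01, rng=None):
    """Majority vote over the labels of an image and small random
    perturbations of it.  'classify' is assumed to take an array of
    pixel values in [0, 1] and return a hashable label."""
    rng = np.random.default_rng() if rng is None else rng
    labels = [classify(image)]
    for _ in range(n_samples):
        perturbed = image + rng.normal(scale=noise_scale, size=image.shape)
        labels.append(classify(np.clip(perturbed, 0.0, 1.0)))
    return Counter(labels).most_common(1)[0][0]
```

If the blind spots really are small and scattered, the perturbed copies should mostly land back in the correct classification region and out-vote the adversarial one; if they don't, that in itself would tell us the blind spots are bigger than we'd like.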

Comment Re:Pretty stupid reasoning (Score 1) 405

I could easily charge 8–10 € per standard page, so a 200-page novel could easily reach 2000 €. And that's just proofreading! Editing would cost much more.

Proofing my novel (344 pages) would have cost $1800 in Canada through a reputable service. I've talked to editors who charge around $1500 and up to turn your book into something publishable. That's somewhat less than you're suggesting.

I agree many people are ill-equipped to deal with the costs and skills required for independent publishing... but those same people are also incapable of getting published traditionally.

The distribution looks like this:

|A|aaaaaaaaaa|bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb....

where:

A = traditionally published authors
a = people with the skills/resources to be successful indie authors
b = everyone else

Obviously b >> a, but equally obviously, a >> A.

When focusing only on the undoubted fact that b is a huge population of incompetents, it is important not to lose sight of the equally certain fact that there are many competent, inspired, creative people in the "a" population who for various reasons can't get a leg up in the traditional publishing world.

The question is, "If we have to, are we willing to ditch A for a?"

Personally, I am. I don't think the world would be a poorer place on net if the Charles Strosses of the world had to go back to ditch digging while ten times their number became successful indie author/publishers.

Comment Re:Amazon is short-sighted (Score 1) 405

And out of that 70%, the writer now has to supply their own editors, artwork, proof readers and layout specialists.

Or they need to learn to do it themselves, most of which is not too difficult.

My recent book (DRM free on Amazon: http://www.amazon.com/Darwins-...) cost me $200 to produce (assuming my time is worth nothing), which went to pay for the cover art.

If I had wanted to pay an editor it would have cost around $1500--I looked into this, but decided to play with a combination of early-reader feedback (I have a number of friends whom I trust to tell me when things are crap, and believe me, they did), mechanical editing based on research-grade natural-language processing tools, and semi-automated proofreading (which I wrote my own code for using a variety of heuristics tailored to the kinds of errors I'm particularly prone to making.)
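For the curious, the kind of heuristic proofreading I mean is nothing exotic. A stripped-down sketch looks like this (these particular patterns are just illustrations, not my actual rule set):

```python
import re
import sys

# A few error classes that spell-checkers tend to miss.  The patterns are
# illustrative examples only, not an exhaustive or author-specific list.
CHECKS = [
    (re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE), "doubled word"),
    (re.compile(r"\bits\s+(a|the)\b", re.IGNORECASE), "possible its/it's confusion"),
    (re.compile(r"\b(could|should|would)\s+of\b", re.IGNORECASE), "'could of' for 'could have'"),
    (re.compile(r"\bteh\b", re.IGNORECASE), "common typo 'teh'"),
]

def proofread(path):
    """Print line-numbered warnings for each heuristic match."""
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            for pattern, message in CHECKS:
                if pattern.search(line):
                    print(f"{path}:{lineno}: {message}: {line.strip()}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        proofread(path)
```

The real value comes from tuning the pattern list to the mistakes you personally keep making, which is something no off-the-shelf checker will do for you.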

I'm sure there are typos and minor grammar issues remaining, but at a level that is not materially worse than many professionally edited books. And this was a first pass at this method. I'm sure that with more work the process of editing and proofreading can be much more highly automated, although nothing short of a full AI will be able to replace first readers for basic feedback.

As to design and layout, anyone with a reasonable level of HTML, CSS and LaTeX experience should be able to produce a decent-looking ebook or print book. There are tricks, but it's not rocket science.

We are in the early days of indie publishing, and things are only going to get better as we automate more processes and lower barriers between authors and readers. There is nothing today stopping a writer from producing a professional-quality book with minimal resources, and that's a good thing.

Comment Re:Alternative Summary (Score 1) 405

Exactly. Stross is, after all, the author of "A Score is not an Album", or something equally limited in its perspective. He apparently believes that there is no indie music, because musicians are not capable of doing all the things that labels do.

While I'm perfectly willing to believe that Charles Stross is incapable of doing all the things that are required to make an indie author successful--hire or otherwise collaborate with a decent, professional editor, learn or hire out for design and production, pursue marketing opportunities, etc.--the claim that no one has those capabilities is obviously absurd.

There will be plenty of books to read in the glorious future... it just won't be Charles Stross writing them. While that sucks for him, it isn't clear that his product is of a sufficiently novel and irreplaceable kind that anyone will miss him more than they will appreciate the indie authors who would otherwise be shut out by the traditional publishing industry.

He fails particularly on the claim that this process will end by "leaving just Amazon as a monopoly distribution channel retailing the output of an atomized cloud of highly vulnerable self-employed piece-workers like myself."

There was this clever guy who once pointed out the "contradictions in capitalism" who suggested that situations like this were unstable. There was another guy--possibly more clever, as he didn't think world-wide revolution was required to resolve them--who even coined a term for what happens when such an unsatisfactory situation arises: "creative destruction".

Amazon will no doubt try to build and maintain a monopoly, but there is already an indie ecosystem reacting to that and working to create alternative quality and delivery systems. It is still in its infancy, but Amazon is going to lose in the end, because they can't control the delivery process. At worst they will become the iTunes of books: influential, but hardly all-powerful.

Comment Re:Amazon provides a service (Score 1) 218

Amazon sells books. It does not write them or publish them.

This is not strictly true, at least with respect to publishing. Amazon owns CreateSpace, which is a publisher. As such, it is in direct competition with other publishers, or soon will be.

CreateSpace is currently aimed at the indie/print-on-demand market (for example: https://www.createspace.com/47...) but Amazon has expressed an interest in branching out into mainstream publication.

As such, it is positioned to dominate the publishing and distribution vertical completely, and people are worried about this, for good or ill. This story is less about what Amazon is doing today than what they might do tomorrow.

Personally, as an independent author/publisher I'm not too worried: the more restrictive Amazon becomes the more they set up the conditions of their own demise, because they have no way of effectively erecting barriers to entry in the publishing business, particularly in e-books.

No one will be able to make any money at it, but for authors, writing hasn't been about money for decades, so this won't change anything except the viability of traditional publishing.

Comment Re:You bet they are "quietly optimistic".. (Score 2) 80

In NM, as in Oz, a lot of fires start as brushfires -- no wood, no particular heat retention -- stop it even briefly and it doesn't get into the forests.

Reading between the lines of TFA, this appears to be an extremely optimistic reading of the research, which so far involves blowing a flame off a propane source, and which appears in the long term to be directed at separating flame from tree-tops, with the idea that this will slow down the rate of spread, not put the fire out. It will give emergency services more time to respond by quenching the fast-moving tree-top phase of the fire.

The problem is that there will still be glowing coals on the woody stems, even if they are just brush (unless it is a pure grass fire, which is pretty rare, and even then there are usually bushes with woody stems involved.) Those coals will have the potential to re-ignite the fire, although the goal of slowing things down may be achieved, and it may well be enough to both save lives and to allow firefighters to get things under control.

This is an interference measure, not an extinguisher.

Comment Re:Times sure are changing (Score 5, Insightful) 147

"Messing with life", as you call it, has an incredible potential for doing harm if approached carelessly. It doesn't take much imagination to realize this, either: synthetic infectious agents, engineered organisms that displace natural diversity, and so on.

You've missed the GP's point, and created an instance of his observation.

There is almost nothing we do that doesn't have "an incredible potential to do harm", and ubiquitous computational intelligence is one of the most obvious candidates for that fear going... yet hardly anyone is afraid of it.

Ubiquitous computational intelligence (UCI) has the potential to put everyone under constant observation, including position tracking. It has the potential to serve ads to you in your sleep, monitor your caloric intake, keep track and report your alcohol consumption, your masturbation habits... everything. It's Orwell's telescreens on steroids.

Yet the response to such things on /., while sometimes somewhat skeptical, is mostly positive. Relatively minor messing with the genome of some fairly rare creature, on the other hand, brings out the panic, with flat-out bizarre, anti-Darwinian statements like "these things died out for a reason" (posted by an AC above, who makes points similar to yours.)

Sure, messing with genomes carries risks, but they are comparable to the risks we take with all kinds of technological development, and yet for some reason people seem a lot more sensitive to them. It may not be explicitly religious, but it sure isn't rational.

Comment Re:It's all about ME, ME, ME. (Score 4, Interesting) 255

IMOHO, one of the reasons that many people think that robots are "hyper-competent" is that too many people think that a program can encompass and accommodate every possible circumstance.

This simply reflects the tendency people have to believe in their own hyper-competence. Most interesting ethical issues are unsolvable in any formal sense by virtue of three simple facts:

1) moral values are ordinal, not cardinal (I value my children's lives more than my cat's life, no matter how many cats I have)

2) we value outcomes but choose actions

3) outcomes are related to actions by some more-or-less broad probability distribution.

This means we cannot choose outcomes directly, and we cannot do probability calculations over actions to assign them values, because ordinals don't support simple arithmetic.

There are two special cases that fortunately cover a lot of every-day life:

A) the probability distribution is narrow enough that we can ignore it, so we can effectively choose outcomes based on our ordinal values

B) there is a market in the outcomes we are choosing between, which allows us to compute cardinal (dollar) values from our ordinals, so we can do probability calculations on the domain.

But interesting moral quandaries are simply not computable, so talking about them as if they were computable, even by human beings, is to be on a hiding to nothing.
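To illustrate the difference between case B and the general case with some made-up numbers: when a market supplies cardinal (dollar) values you can do the probability arithmetic; with a bare ordinal ranking there is nothing for the arithmetic to operate on.

```python
# Toy illustration (made-up numbers): expected value works on cardinal
# (dollar) values, but there is no analogous calculation for ordinal ranks.

# Case B: a market gives cardinal values, so probability-weighted
# comparison of two actions is simple arithmetic.
action_1 = [(0.9, 100.0), (0.1, -50.0)]   # (probability, dollar outcome)
action_2 = [(0.5, 300.0), (0.5, -200.0)]

def expected_value(outcomes):
    return sum(p * value for p, value in outcomes)

print(expected_value(action_1))  # 85.0
print(expected_value(action_2))  # 50.0 -> prefer action_1

# General case: a bare ordinal ranking carries no magnitudes, so
# 0.9 * "child's life" + 0.1 * "cat's life" is simply undefined.
ranking = ["child's life", "cat's life", "convenience"]
```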

Comment Re:Bad move (Score 1) 280

Well this Mike Hopkins guy is mostly comparing neutron yields from the D-T reaction LPP were testing with.

Which matters a great deal, because Hopkins shows the device Lerner et al have built behaves well within the limits on such devices. That makes claims of a near-breakthrough much less plausible.

Attempts to generate net power from plasma instabilities have a long and storied history, going back (at least) to the Farnsworth fusor. We know how to get neutrons from such devices, but the goal of net power has remained elusive for decades.

The claim in TFA that "As they leave, the electrons in the beam interact with the electrons in the plasmoid and heat up the area to over 1.8 billion degrees Celsius, which is enough to get fusion reactions" has multiple issues, although some of them may be due to the nature of technology "journalism".

Ignoring the fact that electron-electron interactions are not what you need for plasma heating (electrons, being very light, have trouble transferring much energy to ions), the claim is that they are generating a thermal plasma with a temperature of 1.8E9 C, whereas the conventional explanation would be that the neutrons they are seeing are from beam/plasma interactions. The important fact is that beam/plasma interactions do not scale in a way that would allow them to produce net power, ever.

So it appears they are a) seeing neutrons, and claiming b) that the neutrons are due to a thermal plasma, which, given the other parameters they infer, must be c) at 1.8E9 C.

Hopkins is pointing out that claim b is unlikely, and that conventional beam/plasma theory can account for their neutrons just fine.
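For context, here is the unit conversion (my own back-of-the-envelope arithmetic, not anything from TFA or from Hopkins): a thermal plasma at 1.8E9 C corresponds to a characteristic particle energy of roughly 155 keV.

```python
# Convert the claimed plasma temperature to a characteristic particle
# energy kT (my own arithmetic, for context only).
BOLTZMANN_EV_PER_K = 8.617e-5          # Boltzmann constant, eV/K

def kelvin_to_kev(temperature_k):
    """Return kT in keV for a temperature in kelvin."""
    return temperature_k * BOLTZMANN_EV_PER_K / 1e3

print(kelvin_to_kev(1.8e9))            # ~155 keV (Celsius offset is negligible)
```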

Comment Re:What Could Possibly Go Wrong? (Score 5, Interesting) 74

The neat thing about terminal cancer patients is that the answer is "Not much that would be worse than the alternative."

Conversely, the same logic sets a high bar that makes it very difficult to improve on invasive but adequate treatments. Consider mastectomy for early-stage breast cancer: it works pretty well, and that makes it damned near impossible to test any alternative treatment that might work just as well or better, and which would certainly be less invasive.

I worked on a cancer-therapy project once and had the clever idea of applying the technique we were using--which was aimed at something that was incurable at the time--to certain kinds of breast cancer, which were just similar enough to be interesting candidates for the technique. I talked to a breast cancer researcher and he said, "That's a really clever idea. It sounds plausible. I can't do anything with it." And then he explained the above reasoning.

This means that we tend to focus on treatments for currently untreatable cancers, and once we have something that is semi-OK, the rate of improvement goes way down. It doesn't go to zero, by any means, but the incentives shift in a way that is both perfectly logical and kind of perverse.

Comment Re:The Science is settled! (Score 5, Insightful) 330

In the same way that one cannot expect a nice fit between observational studies and the CMIP5 models.

This is a point that is radically misunderstood by almost all sides of the political debate around anthropogenic climate change. Think about what it implies: climate models do not predict observational reality. That, and only that, is why one cannot and should not expect a nice fit between the model and the reality.

This is OK, mind: non-predictive modelling is extremely useful, and there is very little doubt that human activity is adding about 1.6 W/m**2 to the Earth's heat budget (somewhat less than 0.5% of the total, equivalent to an orbital perturbation of about half the distance to the Moon). But climate models do not tell us in any meaningful or useful sense how the ocean/atmosphere system will respond to that additional heating.
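The "somewhat less than 0.5%" figure is easy to check, assuming a globally averaged top-of-atmosphere insolation of about 340 W/m**2 (my round number, not anything from the models):

```python
# Rough check of the forcing fraction.  340 W/m**2 is my assumed round
# number for globally averaged top-of-atmosphere insolation.
anthropogenic_forcing = 1.6    # W/m**2
mean_insolation = 340.0        # W/m**2

print(f"{anthropogenic_forcing / mean_insolation:.2%}")  # ~0.47%
```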

There will be a response, but estimating its type, distribution and magnitude well enough to be considered predictive is well beyond current model capabilities. I haven't looked at AR4 or 5 code, but AR2 had approximations that made me cringe, up to and including fixing up energy conservation at the end of each time-step by adjusting cell temperatures.

Climate skeptics--the sane ones at least--are aware of this and take the strong claims of predictive power in the models with a large grain of salt. They also tend to assume that "you can't prove there will be a disaster" means "there won't be a disaster", which is utterly unwarranted.

Climate believers also ignore the poor predictivity of the models, which is unfortunate, because the logical response to that poor predictivity is to invest in robustness and flexibility rather than specific solutions, because we don't know what the specific future conditions will be.

Climate believers also undermine their case by an excessive focus on "abstinence only" policies, and are for some reason unwilling to contemplate any response to climate change that involves things like nuclear power and geo-engineering research. It's almost as if they think the climate-driven destruction of civilization is such a huge issue that we must be willing to do anything to stop it... except change anyone's mind on the relative value of nuclear energy.

Comment Re:From whence the headline? (Score 3, Interesting) 116

And we won't until testing (automated or otherwise) gets better in both places.

I'm skeptical of testing (automated or otherwise), and I think the point in TFS is well taken: testing that would have caught this bug would have involved creating tests that virtually duplicated the system under test.

While some code is amenable to test-driven development and thorough testing, and that should be done wherever possible, the resources required to test some code effectively double the total effort, and maintaining the tests becomes a huge headache. I've worked in heavily-tested environments and spent a significant fraction of my time "fixing" tests that weren't actually failing, but which had become out-of-date or inappropriate due to changes in interfaces and design.

That's not to say that testing can't be done better, but it's clearly a hard problem, and I've yet to see it done well for the kind of code I've worked on over the past 20 years (mostly algorithmic stuff, where the "right" answer is often only properly computable by the algorithm that is supposed to be under test, although there are constraints on correct solutions that can be applied.)

So I'm arguing that a culture of professionalism that implements best practices--including coding standards and code reviews (possibly automated) that check for simple things like open if statements and unchecked memory access--would be lower cost and at least as effective as heavier-weight testing.

This is a static-analysis vs dynamic-analysis argument, and while I certainly agree that dynamic analysis is necessary, both these bugs would have been caught with fairly simple-minded static analyzers checking against well-known coding standards from a decade ago.
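To give a sense of how lightweight that kind of static check can be, here's a toy sketch (nowhere near a real linter, and not tied to any particular coding standard) that flags a control header that ends its line when the following statement does not open a brace block--the construct that made the goto fail failure mode possible:

```python
import re
import sys

# Minimal sketch of a static check: flag an 'if'/'for'/'while'/'else' header
# that ends its line when the next non-blank line does not open a brace.
# Real analyzers parse the code; this only illustrates how cheap the basic
# check is.
CONTROL = re.compile(r"^\s*(if|else if|for|while)\s*\(.*\)\s*$|^\s*else\s*$")

def check_file(path):
    with open(path, encoding="utf-8", errors="replace") as f:
        lines = f.readlines()
    for i, line in enumerate(lines):
        if CONTROL.match(line.rstrip()):
            # The next non-blank line should open a brace block.
            for nxt in lines[i + 1:]:
                if nxt.strip():
                    if not nxt.lstrip().startswith("{"):
                        print(f"{path}:{i + 1}: control statement without braces")
                    break

if __name__ == "__main__":
    for path in sys.argv[1:]:
        check_file(path)
```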

Comment Re:Worth repeating... (Score 4, Insightful) 116

I've often said that you don't fix a software bug until you've fixed the process that allowed the bug to be created.

One of the things that struck me about the goto fail bug was that it is exactly the kind of error coding best practices in the '90s were specifically engineered to prevent.

Any reasonable coding standard from that time forbade ifs without braces for precisely this reason. And yeah, that's a "no true Scotsman" kind of argument (if a coding standard didn't contain such a clause it was not, by my definition, "reasonable"), but the point still holds: software developers at the time were aware of the risk of open if statements causing exactly this kind of failure, because we had observed them in the wild, and designed coding standards to reduce their occurrence.

So to be very specific about what kind of processes and culture would have prevented this bug: a reasonable coding standard and code reviews would have caught it (much of the code review process can be automated these days), and a culture of professionalism is required to implement and maintain such things.

The canonical attribute of professionals is that we worry at least as much about failure as success. We know that failures will happen, and work to reduce them to the bare minimum while still producing working systems under budget and on time (it follows from this that we also care about scheduling and estimation.)

Amateurs look at things like coding standards and reviews and say, "Well what are the odds of that happening! I'm so good it won't ever affect my code!"

Professionals say, "The history of my field shows that certain vulnerabilities are common, and I am human and fallible, so I will put in place simple, lightweight processes to avoid serious failures even when they have low probability, because in a world where millions of lines of code are written every day, a million-to-one bug is written by someone, somewhere with each turn of the Earth, and I'd rather that it wasn't written by me."

It's very difficult to convince amateurs of this, of course, so inculcating professional culture and values is vital.

Comment It depends... (Score 5, Interesting) 209

The degree of similarity in the DNA changes needed to achieve a particular result will depend strongly on the type of change one is looking at.

For the case of toxin-resistance, which is much closer to the molecular level, the odds of similar changes to the DNA are much higher than for complex morphological changes.

Molecular changes like toxin-resistance are more likely to involve a single gene that codes for a single enzyme, changing the enzyme so that the toxin is no longer metabolized in a harmful way. There are going to be a very limited number of ways to do this because it's pretty close to a one-gene/one-enzyme mapping in many cases.

Morphological changes, on the other hand, involve a whole network of genes that are turned on over the course of development, and the network can be altered in many different ways to get to the same result. Think about it like a road network where you're used to taking a particular route to get from A to B. If a bridge goes out on your usual route, you may choose different alternatives depending on the time of day, the kind of vehicle you drive, etc. Networks create choices.

Even then it will depend on the kind of morphological change we are talking about.

For example, there is a lizard in Mexico that was studied in the '80s or '90s. There were several related species living inland, and a couple of isolated species on the coast near the Yucatan peninsula. Both coastal species had an extra cervical (neck) vertebra, and it had been assumed on the basis of this similar morphology that their evolutionary history had been a general migration to the coast, an adaptation to coastal environments that involved having a longer neck, followed by a general die-back that resulted in the two existing but separate populations.

It turns out, based on their genes, that the two coastal species hadn't shared a common ancestor for millions or tens of millions of years, and the adaptation to coastal living had happened independently and fairly recently in each. In this case, because certain aspects of body plan are controlled by a highly conserved and relatively simple set of genes, the additional vertebrae were the result of similar sets of genetic changes.

Things like body width, which is what TFA is talking about, are a lot more complicated in their regulation, so more likely to be achieved via different genetic changes that have the same morphological outcome.

I'm going to throw in a shameless plug here because it seems relevant to the topic at hand. I've just published a hard SF novel that's premised on a what-if about the role of mathematics and law-like descriptions in evolution. If you're interested in that sort of thing you should check it out: http://www.amazon.com/Darwins-...

Comment Worms are a poor model (Score 5, Interesting) 66

Humans live insanely long lives for mammals: twice the average. The average mammal lives about a billion heartbeats; humans live about two billion. "Heartbeats" are a convenient normalization that accounts pretty well for differences in size, etc.
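The two-billion figure is easy to sanity-check with round numbers (mine, nothing rigorous): a resting rate of about 60 beats per minute over roughly 65 years gets you there.

```python
# Sanity check of the "two billion heartbeats" figure, using my own round
# numbers: ~60 beats per minute and a ~65-year lifespan.
beats_per_minute = 60
lifespan_years = 65

minutes_per_year = 60 * 24 * 365
total_beats = beats_per_minute * minutes_per_year * lifespan_years
print(f"{total_beats:.2e}")  # ~2.05e+09
```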

There are fairly plausible evolutionary reasons for this. Grandparents are the primary mechanism by which culture is transmitted, so if your grandparents (or the grandparents of your close kin) lived a long time you would have a better chance of reproducing yourself, assuming cultural knowledge is useful in your local environment. And people with long-lived grandparents tend to be long-lived themselves, so the trait gets selected for.

As such, animal models for human aging are extremely hard to come by, and ones as distant as worms are very unlikely to produce results that are generalizable to humans. This is why so many things cure cancer in rats but have no effect on humans: rats will get cancer from a dirty look, so their cancers tend to be relatively easy to knock over. Cancers that survive all the clever molecular tricks humans throw at them are much harder nuts to crack.

We don't even know if calorie restriction works in humans (not enough people have been starving themselves for long enough to tell) so this article is way, way out on a speculative limb. Good science, I'm sure, but the hook should be "Scientists learn something about metabolic control pathways" and not "You may live forever!"
