
Comment Re:uncover overlooked or never-considered patterns (Score 1) 17

The deep learning revolution did not solve the problem you claim. What deep learning does is make it efficient to fit more complex piecewise linear functions (if you use ReLU, that is, which is the most popular activation (*)). That's both a blessing and a curse.

What actually happened in the deep learning revolution is that humans solved the problem of designing basic features over many generations of papers, progressively simplifying the solution and discovering what is important and what isn't. Algorithms were weeded out until we reached the point we're at now, where the data representation is matched to the algorithm; in this case the algorithm of choice happens to be of the deep learning type. It only looks as though deep learning is good for every dataset, but that's not true.

For example, in vision problems, try training a deep network on input that is not in the form of pixels and not in the form of multiple colour planes. It will fail miserably; the quality of recognition will be abysmal. That's why data design is so important: you have to know what the strengths of the AI model actually are. In this case, the statistical regularities between neighbouring pixels are what enable the CNN layers to extract information, and those regularities are an artefact of choosing to stack pixels and colour planes into a rectangular grid. That choice solves most of the problem.
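A toy numpy sketch of what I mean (synthetic data, made-up numbers): on a smooth image, each pixel is strongly correlated with its neighbour, which is the regularity local conv filters exploit; scramble the spatial layout with a fixed permutation (same data, same labels) and that correlation vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth synthetic "image": low-frequency signal plus a little noise.
x = np.linspace(0, 4 * np.pi, 64)
img = np.sin(x)[:, None] + np.cos(x)[None, :] + 0.1 * rng.standard_normal((64, 64))

def neighbour_corr(a):
    """Correlation between each pixel and its right-hand neighbour."""
    left, right = a[:, :-1].ravel(), a[:, 1:].ravel()
    return np.corrcoef(left, right)[0, 1]

# Same values, scrambled layout: the grid regularity is destroyed.
perm = rng.permutation(img.size)
scrambled = img.ravel()[perm].reshape(img.shape)

print(f"smooth image: {neighbour_corr(img):+.3f}")       # close to +1
print(f"scrambled:    {neighbour_corr(scrambled):+.3f}")  # close to 0
```

The information content is identical in both arrays; only the layout differs, and the layout is what the CNN's inductive bias is matched to.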

Now, pixels didn't always exist; they were invented by people quite recently. Look up TV technologies of the 1930s and you'll find it's all about deflecting electron beams. There's really nothing natural about pixels; they're just what our current technologies are based on. And so there's nothing natural about what a deep network does either. It's a system that has been selected for fitness against our current tech stack, for a handful of high-value problem domains. That implies nothing about other problem domains that haven't been studied so intensively.

(*) If you don't use ReLU but some other smooth activation family for your deep network, there will always be a close piecewise linear approximation, since piecewise linear functions are dense in the relevant function space. So assuming ReLU everywhere loses little generality.
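To make the piecewise linearity concrete, here's a tiny sketch (hypothetical weights, numpy only): a one-hidden-layer ReLU net is exactly linear between its "kinks" (the inputs where a hidden unit switches on or off), so the midpoint identity f((a+b)/2) = (f(a)+f(b))/2 holds on any kink-free interval and fails across a kink.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# Hidden units kink at x = 1, 2, 3 (where w1*x + b1 crosses zero).
w1 = np.array([1.0, 1.0, 1.0])
b1 = np.array([-1.0, -2.0, -3.0])
w2 = np.array([0.5, -1.0, 2.0])

def f(x):
    return w2 @ relu(w1 * x + b1)

# No kink inside [1.25, 1.75]: linearity is exact.
assert np.isclose(f(1.5), (f(1.25) + f(1.75)) / 2)

# [1.5, 2.5] straddles the kink at x = 2: the function bends there.
print(f(2.0), (f(1.5) + f(2.5)) / 2)  # unequal
```

With many layers you get the same picture, just with vastly more linear regions; that explosion of regions is the "more complex piecewise linear functions" mentioned above.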

Comment Re:Up next (Score 2) 52

This is not a problem with AI as such; it's inherent in the design of the models.

Each output is effectively a dart thrown at the dartboard, with a wide error distribution. Each fix is another output thrown at the dartboard, with the same error distribution. It's a stationary process, which must reproduce the same constant variance throughout the iterations.

The outputs will eventually come arbitrarily close to the target, but the number of iterations needed is exponential. In practice, the human asking for another iteration runs out of patience and money far too soon.
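A quick simulation of the dartboard picture (assumptions: each retry is an independent Gaussian draw, target radius and dimensions made up): when fixes are fresh draws from the same error distribution, the expected number of attempts is 1/P(hit), and the hit probability collapses exponentially as the problem's dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)
eps, n = 1.0, 200_000  # target radius; Monte Carlo sample size
p_hit = {}

for d in (1, 2, 4, 8):
    # Each row is one independent "attempt" in d dimensions.
    draws = rng.standard_normal((n, d))
    p_hit[d] = np.mean(np.linalg.norm(draws, axis=1) < eps)
    print(f"d={d}: P(hit) ~ {p_hit[d]:.4f}, expected attempts ~ {1 / p_hit[d]:,.0f}")
```

The same stationarity is why "just ask it to fix it again" doesn't converge the way people hope: nothing in the retry loop shrinks the variance.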

Comment Re:uncover overlooked or never-considered patterns (Score 1) 17

Sort of, but not quite. AI is not *actually* good at finding patterns. The truth is that AI models depend on humans setting up the problem first, and humans creating the class of features that will uncover patterns easily. This has always been the case since the dawn of time, ca 1958.

To state this another way, AI cannot find patterns if the inputs don't show the patterns clearly. The wildly successful applications of AI to date have used human insight and experience to narrow down and curate the input signals; that is what made those successes possible.

If you merely throw an AI model at a dataset that hasn't been carefully thought out, you'll just get garbage. The model won't find any patterns that actually hold up out-of-sample.

Now to your example: you cannot use just any X and Y coordinates; they have to have semantically meaningful connections to reality, and that has to be achieved by the curation and selection of the data sets. By the time the AI model sees the X and Y coordinates in your example, the problem is already 90% solved.

Comment Re:No. (Score 1) 26

That wouldn't actually reduce the recipient's labour of scanning the messages. The reason we have automatic spam filters is precisely that human scanning is labour intensive. Getting paid for scanning still requires scanning, so it remains labour intensive.

In effect, the proposal would have solved the spam problem by forcing every human being to take a second gig on the side (or pay someone to handle it for them). Either way, the recipients are paying to receive unsolicited messages, with the option of a rebate if they don't like the message.

Thanks for the clarification, I had forgotten some of the details.

Comment Re:Gadget prices used to decrease, not increase. (Score 1) 81

Firstly, the stock market is not the iPhone market. The latter is regulated differently from the former and obeys different rules; they are not comparable. Thus the indignation is justified.

Secondly, markets always operate on the assumption that participants do not want to pay more. It's an inbuilt assumption that precludes the kind of behavioural symmetry you are invoking in your example. It would be a crazy kind of market if that symmetry were commonplace; for one thing, it would probably break the theories of equilibrium.

Comment Re:No. (Score 1) 26

Reminds me of the old pay-to-spam proposals from the first dot-com crash. Economics majors seriously proposed that we change SMTP and MUAs so that spammers could pay (fractions of a penny) to put unsolicited messages in your inbox, guaranteed to bypass the filters. You'd have to open the message to collect the cash. The market knows what's best for you, don't you know.

I think those guys ended up at Facebook.

Comment Re:WTF is alpha (Score 1) 35

For the mathematically challenged: the tide lifts all boats (and lowers them). Alpha is how much the boat rises above the tide. When alpha is zero or negative, the stock/boat is not actually rising in a meaningful sense. It's just flapping in the wind, so to speak.
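In code, the boat-versus-tide picture is just a regression (toy sketch, made-up return series and numbers): alpha is the intercept when you regress the asset's excess returns on the market's excess returns, i.e. the part of the rise the tide doesn't explain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily excess returns: the "tide" and a boat riding it.
market = rng.normal(0.01, 0.04, 250)
true_beta, true_alpha = 1.2, 0.0005
asset = true_alpha + true_beta * market + rng.normal(0, 0.01, 250)

# Least-squares line: slope is beta (tide exposure), intercept is alpha.
beta, alpha = np.polyfit(market, asset, 1)
print(f"beta ~ {beta:.2f}, alpha ~ {alpha:.5f} per day")
```

If alpha comes out at zero or below, the boat is only moving because the water is.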
