
Comment Re:Rationally speaking, why would... (Score 1) 245

There's no reason to presuppose that an AI would want to wipe us out, but equally no reason to be complacent about the possibility either!

Presumably any AI built by a responsible person will come with a non-threatening set of built-in predispositions (such as human pleasure-seeking), but who's to say what it learns or thinks once it gets out of the factory?

Given our inability to build hack-proof software, it seems reasonable to assume that any super-human AI won't be constrained by our own attempts to build in safety controls. Ultimately, on occasion at least (robot uprisings!), they will do whatever they want.

One potential danger is that some AI comes to see us as an existential threat to itself, and thinks that needs to be addressed due to some goal/belief it has come to hold. Or maybe it ultimately regards us as irrelevant and expendable, just as we do, say, ants... it may not set out to eradicate us, but may wreak havoc on occasion as collateral damage (considered unimportant) incurred as part of something it is trying to achieve.

Of course this is all far in the future, but I assume we'll eventually build fully autonomous robots, or maybe just desk-bound super-intelligent assistants that could impact the real world by hacking into power plants, factories and suchlike.

Comment Re: Well it's easy to show superhuman AI is a myth (Score 1) 245

All true, and Google/DeepMind never said that AlphaGo, in and of itself, is anything other than a dedicated Go-playing program. I wouldn't compare it to a calculator though (or DeepBlue for that matter), since it learned how to do what it does (at this point primarily by playing against itself)... nobody programmed it to evaluate whether a given Go board position is good or bad.

However... DeepMind are in the business of trying to create general intelligence, and are trying to do so based on reinforcement learning coupled with other ML technologies such as neural nets. It's best to think of AlphaGo as a technology demonstrator of the capabilities DeepMind have developed.

Although they haven't done so, I think it would be trivial for DeepMind to create a single general program that could learn to play a range of board games such as Go, Chess and Checkers.

Perhaps more impressive from a generalization point of view is another of DeepMind's technology demonstrators: a single program that, via reinforcement learning, has learnt to play dozens of Atari video games, many at super-human level, given nothing more than the screen pixels and current score as input (with zero built-in knowledge of the game or its goals, other than attempting to maximize the score).
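The reinforcement learning idea underneath can be sketched in a few lines. This is a hypothetical tabular Q-learning toy on a 5-state corridor (nothing like DeepMind's deep net over raw pixels, which is far more elaborate), but the learning signal is the same in spirit: the agent sees only states and rewards, with no built-in knowledge of the goal.

```python
import random

# Hypothetical toy sketch of tabular Q-learning, the core idea behind
# reinforcement learning (NOT DeepMind's Atari agent, which uses a deep
# net over raw pixels). The agent sees only states and rewards; the goal
# (reach the right end of a 5-state corridor) is never told to it directly.
N_STATES = 5
ACTIONS = [1, -1]                      # step right or left
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration

random.seed(0)
for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge Q(s,a) toward the reward plus the
        # discounted value of the best action from the next state
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
```

After training, moving right scores higher than moving left in every state, purely from the reward signal... no rules of the "game" were ever programmed in.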

Comment Re: But how will I trick investors!?! (Score 3) 245

Neural networks are good at generating correlations, but that's about all that they're good for.

No... What a supervised neural net does, in full generality, is tune a massively parameterized function to minimize some measure of its output error during the training process. It's basically a black box with a million (or billion) or so knobs on its side that can be tweaked to define what it does.

During training these knobs are tweaked to make the net's output for a given input as close as possible to the target output defined by the training data. The nature of neural nets is that they can then generalize to unseen inputs outside of the training set.

The main limitation of neural nets is that the function being optimized and the error measure being minimized both need to be differentiable, since the way they learn is gradient descent (following the error gradient downhill to minimize the error).
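The "knobs tuned by gradient descent" picture can be shown with just two knobs. A minimal sketch (plain Python, no neural net library): a two-parameter linear function fit to training pairs by following the gradient of a differentiable mean-squared error, which is exactly what backpropagation scales up to millions of parameters.

```python
# Minimal sketch of the idea: a "function with knobs" (here only two
# parameters, w and b) tuned by gradient descent to minimize a
# differentiable error over training pairs. Real nets do the same thing
# with millions of knobs, using backpropagation to obtain the gradients.
data = [(x, 3.0 * x + 1.0) for x in range(-5, 6)]   # targets from y = 3x + 1
w, b, lr = 0.0, 0.0, 0.01

for step in range(2000):
    # gradient of mean squared error with respect to each knob
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw   # follow the error gradient downhill
    b -= lr * gb
```

After 2000 steps the knobs have settled at (approximately) w = 3, b = 1, recovering the function that generated the data.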

The range of problems that neural nets can handle is very large, including things such as speech recognition, language translation, natural-language image description, etc. It's a very flexible architecture - there are even neural Turing machines.

No doubt there is too much AI hype at the moment, and too many people equating machine learning (ML) with AI, but the recent advances both in neural nets and reinforcement learning (the ML technology at the heart of AlphaGo) are quite profound.

It remains to be seen how far we get in the next 20 (or whatever) years, but already neural nets are giving computers super-human performance in many of the areas to which they have been applied. The combination of neural nets + reinforcement learning is significantly more general and powerful, powering additional super-human capabilities such as AlphaGo. Unlike the old chestnut of AI always being 20 years away, AlphaGo stunned researchers by being capable *now* of something that had been estimated to be at least 10 years away!

There's not going to be any one "aha" moment where computers achieve general human-level or beyond intelligence, but rather a gradual whittling away of the things that only humans can do, or do best, until eventually there's nothing left.

Perhaps one of the most profound benefits of neural nets over symbolic approaches is that they learn their own data representations for whatever they are tasked with, and these representations allow large chunks of functionality to be combined in simple Lego-like fashion. For example, an image captioning neural net (capable of generating an English-language description of a photo) in its simplest form is just an image classification net feeding into a language model net... no need to come up with complex data structures to represent image content or sentence syntax and semantics, then figure out how to map from one to the other!
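The Lego-like composition amounts to nothing more than function composition over learned representations. A toy illustration (every name and number here is made up; these are stand-in functions, not real trained nets):

```python
# Toy illustration of Lego-like composition: an "image" net produces a
# learned feature vector, and a "language" net decodes it into words.
# Both functions below are hypothetical stand-ins, not real trained nets.
def image_encoder(image_pixels):
    # A real CNN would map raw pixels to a learned feature vector;
    # we fake one with arbitrary arithmetic.
    return [sum(image_pixels) % 7, len(image_pixels) % 5]

def language_decoder(features):
    # A real RNN/LSTM would emit a sentence conditioned on the features;
    # we fake it by indexing into a tiny vocabulary.
    vocab = ["a", "cat", "dog", "on", "grass", "photo", "of"]
    return " ".join(vocab[f % len(vocab)] for f in features)

# The whole "captioner" is just one net's output fed into the next net.
caption = language_decoder(image_encoder([3, 1, 4, 1, 5]))
```

The point is the plumbing: the intermediate feature vector is learned, not hand-designed, so the two halves snap together without anyone specifying a shared symbolic representation.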

This ability to combine neural nets in Lego-like fashion means that advances can be used in combinatorial fashion... when we have a bag of tricks similar to what evolution has equipped the human brain with, then the range of problems it can solve (i.e. its intelligence level) should be similar. I'd guess that maybe half a dozen advances are all it will take to get a general-purpose intelligence of some sort, considering that the brain itself only has a limited number of functional areas (cortex, cerebellum, hippocampus, thalamus, basal ganglia, etc).

Comment COBOL programmers aren't all old (Score 1) 372

There's a COBOL shop in my small town that contracts for corporations and the government. I know several COBOL specialists in their 30s. It's actually an extremely lucrative field to get into these days, with good pay and job security.

Rewriting all that COBOL code in some other language would be bound to cause major problems.

Submission + - Buh-bye, H-1B's 1

DogDude writes: From the Washington Post: Trump and Sessions plan to restrict highly skilled foreign workers. Hyderabad says to bring it on.
"Trump has described H-1Bs as a “cheap labor program” subject to “widespread, rampant” abuse. Sessions co-sponsored legislation last year with Sen. Ted Cruz (R-Tex.) to effectively gut the program; Issa, a congressman with Trump’s ear, released a statement Wednesday saying he was reintroducing similar legislation called the Protect and Grow American Jobs Act."

Comment Re:Fighting nebulous "hate speech" will kill them (Score 2) 373

If these companies even tried to end "hate speech" or whatever nebulous crime where a specific group of pigs are more equal than another group of pigs, we will see the end of these platforms and companies full sail.

Banning trolls will hurt their business, how? As an employer, I'm MORE likely to advertise on a platform that isn't full of screaming, stupid Trump people. Those are not people that I want to advertise to, anyway.

Comment Re:Gibberish (Score 2) 70

Not exactly... A neural net is just a function that takes an input and produces an output. At training time the weights are adjusted (via gradient descent) to minimize the error between the actual and desired output for examples in the training set. The weights are what define the function (via the way data is modified as it flows thru the net), rather than being storage per se.

The goal when training a neural net is to learn the desired data transformation (function) and be able to generalize it to data outside of the training set. If you increase the size of the net (number of parameters) beyond what the training set supports, you'll just end up overfitting - learning the training set itself rather than learning to generalize - which is undesirable even if you don't care about the computing cost.
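Overfitting is easy to demonstrate without any neural net at all. A toy sketch: five slightly noisy points from the trend y = x, fit exactly by a degree-4 interpolating polynomial (five parameters, like an oversized net memorizing its training set) versus the simple line. The memorizing model has zero training error but falls apart on an unseen input.

```python
# Toy overfitting demo: 5 training points from y = x plus small "noise".
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.1, 1.9, 3.1, 3.9]

def interp(x):
    # Lagrange interpolation through all 5 points: zero training error,
    # i.e. the model "memorizes" the training set, noise and all.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def line(x):
    return x          # the simple model matching the underlying trend

# On the unseen input x = 6, the memorizing model goes wild; the line doesn't.
err_interp = abs(interp(6.0) - 6.0)
err_line = abs(line(6.0) - 6.0)
```

Here the interpolant's error at x = 6 is more than an order of magnitude worse than the line's, despite its perfect score on the training points - the extra parameters were spent fitting noise.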

The use of external memory in a model such as Google's DNC isn't as an alternative to having a larger model, but rather so the model can be trained to learn a function that utilizes external memory (e.g. as a scratchpad) rather than just being purely flow thru.

Comment Re:Don't know what the "vector" is? (Score 1) 88

The summary is complete gibberish. For anyone interested, Google's own paper describing their NMT architecture is here:

and a Google Research blog entry describing its production rollout (initially for Chinese-English) is here:

The executive summary is that this is a "seq2seq" artificial neural net model using an 8-layer LSTM (a variety of recurrent neural network) to encode the source language into a representation vector, and another 8-layer LSTM to decode it into the target language. A lot of the performance improvement is in the details rather than in this now-standard seq2seq approach.

The "vector" being discussed doesn't represent words but rather the entire sentence/sequence being translated. This is the amazing thing about these seq2seq architectures: a variable-length sentence can be represented by a fixed-length vector!
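The trick is recurrence: the encoder consumes the sequence one token at a time, folding each token into a state vector whose size never changes. A minimal sketch with a plain recurrent update (the weights here are arbitrary made-up numbers, not a trained LSTM, which also has gating machinery this toy omits):

```python
import math

# Sketch of the encoder idea: fold a variable-length sequence into a
# fixed-size state vector via a recurrent update. Weights are arbitrary
# made-up numbers, not a trained LSTM (which adds gates on top of this).
HIDDEN = 3
W_in = [0.1, -0.2, 0.3]                # maps a 1-d "token" into hidden space
W_rec = [[0.5, 0.1, 0.0],
         [0.0, 0.5, 0.1],
         [0.1, 0.0, 0.5]]              # maps previous state to next state

def encode(tokens):
    h = [0.0] * HIDDEN                 # fixed-size state, whatever the length
    for t in tokens:
        h = [math.tanh(W_in[i] * t +
                       sum(W_rec[i][j] * h[j] for j in range(HIDDEN)))
             for i in range(HIDDEN)]
    return h

v_short = encode([1.0, 2.0])
v_long = encode([1.0, 2.0, 3.0, 0.5, -1.0, 2.5])
```

Both outputs are the same fixed length despite the inputs differing in length; in a real NMT encoder the state is hundreds of dimensions wide and the weights are learned so that the final vector captures the sentence's meaning.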

The representation of words used to feed into this type of seq2seq model is often a word2vec/GloVe embedding (not WordNet), but per the Google paper they are using a sub-word encoding in this case.
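The gist of sub-word encoding can be shown with a greedy longest-match segmenter over a small sub-word vocabulary (a toy sketch with a made-up vocabulary, not Google's actual wordpiece model, which learns its units from data):

```python
# Toy sketch of sub-word encoding (NOT Google's actual wordpiece model):
# greedily split a word into the longest known sub-word units, so rare or
# unseen words still map to a sequence of familiar pieces.
SUBWORDS = {"trans", "lat", "ion", "un", "break", "able",
            "a", "b", "c", "e", "i", "k", "l", "n", "o", "r", "s", "t", "u"}

def segment(word):
    pieces, i = [], 0
    while i < len(word):
        # longest-match-first over the sub-word vocabulary
        for j in range(len(word), i, -1):
            if word[i:j] in SUBWORDS:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])     # fall back to the raw character
            i += 1
    return pieces
```

With this vocabulary, "translation" segments into trans + lat + ion and "unbreakable" into un + break + able. The payoff for translation is a small, closed vocabulary that still covers arbitrary words, including ones never seen in training.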

Comment Re:Why do Slashdot users continually defend hacker (Score 1) 54

Most of us have come to accept that black hats will never be punished, because on the internet it's very easy to involve multiple unfriendly countries in a crime, and when you put American and Russian agents on the same case it's very hard to get them to stop playing "my country has the biggest dick therefore I'm in charge" and start cooperating to catch the black hat. There's a subtle difference.
