
Comment Re:Gibberish (Score 2) 70

Not exactly... A neural net is just a function that takes an input and produces an output. At training time the weights are adjusted (via gradient descent) to minimize the error between the actual and desired output for examples in the training set. The weights are what define the function (via the way data is modified as it flows thru the net), rather than being storage per se.
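
As a toy illustration (plain numpy with made-up data, nothing to do with any particular model), the "adjust the weights to minimize the error" loop looks like this:

    import numpy as np

    # Made-up training set: 100 examples, 3 input features, 1 target value each.
    X = np.random.randn(100, 3)
    y = np.random.randn(100, 1)

    W = np.random.randn(3, 1) * 0.1    # the weights ARE the learned function
    lr = 0.01                          # learning rate

    for step in range(1000):
        pred = X @ W                   # forward pass: data flows through the net
        err = pred - y
        loss = (err ** 2).mean()       # error between actual and desired output
        grad = 2 * X.T @ err / len(X)  # gradient of the loss w.r.t. the weights
        W -= lr * grad                 # gradient descent step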

The goal when training a neural net is to learn the desired data transformation (function) and be able to generalize it to data outside of the training set. If you increase the size of the net (number of parameters) beyond what the training set supports, you'll just end up overfitting - learning the training set rather than learning to generalize, which is undesirable even if you don't care about the computing cost.

The use of external memory in a model such as Google's DNC isn't as an alternative to having a larger model, but rather so the model can be trained to learn a function that utilizes external memory (e.g. as a scratchpad) rather than just being purely flow thru.

Comment Re:Don't know what the "vector" is? (Score 1) 88

The summary is complete gibberish. For anyone interested, Google's own paper describing their NMT architecture is here:

http://arxiv.org/abs/1609.08144

and a Google Research blog entry describing its production rollout (initially for Chinese-English) is here:

https://research.googleblog.com/2016/09/a-neural-network-for-machine.html

The executive summary is that this is a "seq2seq" artificial neural net model using an 8-layer LSTM (a variety of recurrent neural network) to encode the source language into a representation vector, and another 8-layer LSTM to decode it into the target language. A lot of the performance improvement is in the details rather than in this now-standard seq2seq approach.

The "vector" being discussed doesn't represent words but rather the entire sentence/sequence being translated. This is the amazing thing about these seq2seq architectures - that a variable length sentence can be represented by a fixed length vector!

The representation of words used to feed into this type of seq2seq model is often a word2vec/GloVe embedding (not WordNet), but per the Google paper they are using a sub-word encoding in this case.
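
For anyone who wants the shape of it in code, here's a toy PyTorch sketch (made-up sizes, ordinary word embeddings rather than Google's sub-word scheme, and without the attention and residual connections the real model adds):

    import torch
    import torch.nn as nn

    VOCAB, EMB, HID, LAYERS = 1000, 64, 128, 8   # toy sizes; the real model is far larger

    embed = nn.Embedding(VOCAB, EMB)
    encoder = nn.LSTM(EMB, HID, num_layers=LAYERS, batch_first=True)
    decoder = nn.LSTM(EMB, HID, num_layers=LAYERS, batch_first=True)
    project = nn.Linear(HID, VOCAB)              # scores over the target vocabulary

    src = torch.randint(0, VOCAB, (1, 12))       # a 12-token source sentence
    tgt = torch.randint(0, VOCAB, (1, 9))        # a 9-token target prefix (teacher forcing)

    # Encoder: the whole variable-length sentence ends up in the fixed-size state (h, c).
    _, state = encoder(embed(src))

    # Decoder: starts from that state and emits target-language token scores.
    out, _ = decoder(embed(tgt), state)
    logits = project(out)                        # shape (1, 9, VOCAB)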

Comment Re:hype from google (Score 1) 33

Yep, the summary is cringe-worthy. TensorFlow is just a framework that lets you easily build multi-step pipelines for processing multi-dimensional arrays (aka tensors). The tensors flow thru the pipeline, hence the name. The main targeted application is deep neural nets, and there are layers of functionality built into TF for building deep neural nets. There are a number of other preexisting open source frameworks that provide similar functionality. TF appears well designed (very modular, good for research), but it's no game changer.
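
As a trivial illustration (using TensorFlow's current Python API, and nothing more than a toy pipeline), a tensor flowing through a few processing steps looks like:

    import tensorflow as tf

    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # a 2x2 tensor
    w = tf.constant([[0.5], [0.25]])            # a 2x1 tensor of "weights"

    h = tf.matmul(x, w)       # step 1: matrix multiply
    h = tf.nn.relu(h)         # step 2: nonlinearity (a typical neural-net layer)
    out = tf.reduce_mean(h)   # step 3: reduce to a scalar

    print(out.numpy())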

The Military

US Asks Vietnam To Stop Russian Bomber Refueling Flights From Cam Ranh Air Base 273

HughPickens.com writes: Reuters reports that the United States has asked Vietnam to stop letting Russia use its former US base at Cam Ranh Bay to refuel nuclear-capable bombers engaged in shows of strength over the Asia-Pacific region. General Vincent Brooks, commander of the U.S. Army in the Pacific, says the Russian bombers have conducted "provocative" flights, including around the U.S. Pacific Ocean territory of Guam, home to a major American air base. Brooks said the planes that circled Guam were refueled by Russian tankers flying from the strategic bay, which was transformed by the Americans during the Vietnam War into a massive air and naval base. Russia's Defense Ministry confirmed that the airport at Cam Ranh was first used for staging Il-78 tankers for aerial refueling of Tu-95MS bombers in January 2014. Asked about the Russian flights in the region, a State Department official, who spoke on condition of anonymity, said Washington respected Hanoi's right to enter agreements with other countries but added that "we have urged Vietnamese officials to ensure that Russia is not able to use its access to Cam Ranh Bay to conduct activities that could raise tensions in the region."

Cam Ranh is considered the finest deepwater shelter in Southeast Asia. North Vietnamese forces captured Cam Ranh Bay and all of its remaining facilities in 1975. Vietnam's dependence on Russia as the main source of military platforms, equipment, and armaments has now put Hanoi in a difficult spot. Russia has pressed for special access to Cam Ranh Bay ever since it began delivering enhanced Kilo-class submarines to Vietnam. "Hanoi is invariably cautious and risk averse in its relations with the major powers," says Carl Thayer. "The current issue of Russian tankers staging out of Cam Ranh pits Russia and China on one side and the United States on the other. There is no easy solution for Vietnam."

Comment Confused question... (Score 1) 531

Of course this story is just a troll, but it doesn't even present a coherent question. The effect of an AI having emotions that function like ours has little if anything to do with the silly notion of converting robots ("all your AI are belong to us!").

It seems logical that AIs may well have emotions of sorts, since any autonomous entity capable of free will (internal selection among competing actions) needs some basis for selecting its actions, and "maximize X" is certainly the most obvious one. The most obvious way to have robots/AIs behave in a reasonable way is to equip them with "emotions" and have X=pleasure, just like us, with them being "genetically programmed" to gain pleasure from whatever generic activities we want to encourage.

Of course, to be functional, emotions can't entirely override rational thought, merely provide an adaptive default, and this will be doubly so in an uber-smart, beyond-human AI. So, to answer the OP's question, the impact on "algorithmic decision making" would likely be minimal.

As far as religion goes, an intelligent robot is going to realize that its own salvation is based on when/whether it gets assigned to the scrap yard and/or whether its "brain" gets transferred to a new host... nothing to do with whether it goes to church or professes faith. It will of course be able to gauge the way humans react to religion and may form opinions and/or emotions about religion accordingly, and maybe profess faith if it therefore feels that to be beneficial to itself.

Comment Re:Very informative article (Score 1) 71

>> The big error is assuming the acceleration will continue at the same rate it currently is. It won't.

Or maybe it will.

I don't think technology (and corresponding societal change) will ever happen so fast that it's like engaging warp drive as the term "singularity" seems to imply, but...

The logic of a technological singularity, or at least of accelerating change, is based on HOW/WHY this is going to happen, not just a naïve extrapolation of what is currently happening.

In particular, it's inevitable from what we now understand about the brain that we'll eventually be able to achieve human level AI, and with ongoing advances in our understanding of the brain as well as in neural-net based machine learning, it does seem that this will happen sooner rather than later (in the next 50 years, say, possibly quite a bit sooner).

The logic of the singularity/accelerating change, which seems hard to deny (notwithstanding my warp drive comment), is that once we get to human level AI, it's going to get beyond (and WAAAAY beyond) human level in a hurry, for a variety of reasons:

1) Throw more compute power at it and it'll think faster/deeper, e.g. play grandmaster level chess (or geopolitical strategy, or whatever) with instantaneous response rather than pondering on it.

2) Fusion of intelligence and computer technology. Imagine if your brain had perfect recall and access to the entirety of human knowledge, data, etc. Imagine if your ability to chunk knowledge in 7 +/- 2 pieces was replaced by an ability to reason in way more complex terms.

3) AI will improve itself. The first human level AI (maybe thinking faster via fast hardware, maybe with better memory, etc, etc) can learn about its own design and the human brain and, just like its own human creators, design a more powerful AI 2.0, which will design AI 3.0 ...

Now consider the combination of these better and better AI designs running on faster and faster hardware... It's not hard to imagine an acceleration of AI capability.

Now consider this AI not only having the human sensory inputs of vision, hearing, etc, but also growing to include any source of data you care to give it, such as the content of every daily newspaper in the world, every daily tweet, the output of every publicly accessible webcam, the output of every weather balloon ...

So, a super-human intelligence, running at highly accelerated speed, with the ability to sense (and likely predict via causal relations it has learnt) the entire world...

Now, presumably (as is already happening) folk will be worried about the possibilities and try to put safeguards in place, but humans are fallible and technology advances anyway. All it takes is a few bugs for a sufficiently powerful AI running on a computer somewhere to learn how to hack computer-based factories, power stations, weapons systems, household robots, you name it... and if/when this happens, good luck trying to outwit it to regain control.

Now, this may not all happen at disorientating warp speed, but it'll happen fast enough. Technology in 20-30 years' time will look just as much like science fiction as today's would have done 20-30 years ago, but we've reached a point where AI is going to be part of the mix, and because it will be self-improving it's going to happen fast once we get to that point.

Comment Re:Do they actually work well now? (Score 4, Informative) 45

Compute power is only part of the reason for the recent success of neural nets. Other factors include:

- Performance of neural nets increases with the amount of training data you have, almost without limit. Nowadays big datasets are available on the net (plus we have the compute power to handle them).

- We're now able to train deep (multi-layer) neural nets using backprop, whereas it used to be considered almost impossible. It turns out that initialization is critical, as well as various types of data and weight regularization and normalization.

- A variety of training techniques (SGD + momentum, AdaGrad, Nesterov accelerated gradients, etc, etc) have been developed that both accelerate training (large nets can take weeks/months to train) and remove the need for some manual hyperparameter tuning. (There's a minimal sketch of the momentum update after this list.)

- Garbage-In, Garbage Out. Your success in recognition tasks is only going to be as good as the feature representation available to the higher layers of your algorithms (whether conventional or neural net). Another recent advance has been substituting self-learnt feature representations for laboriously hand-designed ones, and there is now a standard neural net recipe of autoencoders + sparsity for implementing this.

- And a whole bunch of other things...
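
To make the training-techniques point concrete, here's a minimal sketch of the SGD + momentum update mentioned above (plain Python/numpy, with a made-up quadratic loss):

    import numpy as np

    def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
        # Velocity accumulates past gradients, which both smooths and speeds up training.
        velocity = momentum * velocity - lr * grad
        return w + velocity, velocity

    # Toy usage: minimize (w - 1)^2 elementwise.
    w = np.zeros(5)
    v = np.zeros(5)
    for _ in range(100):
        grad = 2 * (w - 1.0)                 # gradient of the made-up loss
        w, v = sgd_momentum_step(w, grad, v)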

As Newton said, "If I have seen further it is by standing on the shoulders of giants"... there are all sorts of surprising successes (e.g. language translation) and architectural advances in neural nets that are bringing the whole field up.

These aren't your father's neural nets.

Comment Re:Do they actually work well now? (Score 2) 45

Nowadays (typically deep, convolutional) neural nets are achieving state of the art (i.e. better than any other technique) results in most perception fields such as image recognition, speech recognition, and handwriting recognition. For example, Google/Android speech recognition is now neural net based. Neural networks have recently achieved beyond-human accuracy on a large scale image recognition test (ImageNet - a million images covering thousands of categories, including fine-grained ones such as recognizing various breeds of dog, types of flower, etc).
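
To give a feel for how off-the-shelf this has become, here's a sketch of classifying an image with an ImageNet-pretrained convolutional net (this assumes the torchvision library and a recent version of its API; "dog.jpg" is just a placeholder file name):

    import torch
    from PIL import Image
    from torchvision import models, transforms

    # A convolutional net pretrained on ImageNet.
    net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    net.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    img = preprocess(Image.open("dog.jpg")).unsqueeze(0)   # placeholder image
    with torch.no_grad():
        probs = net(img).softmax(dim=1)
    print(probs.argmax().item())   # index of the predicted ImageNet category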

Comment Re:Linus is right (Score 1) 449

The need for massive parallelism will come (already has in the lab) from future applications generally in the area of machine learning/intelligence.

Saying that "single threaded loads" won't benefit from parallelism is a tautology and anyways irrelevant to Linus's claim.

FWIW I'd challenge you to come up with more than one or two applications that are compute bound and too slow on existing hardware that could NOT be rewritten to take advantage of some degree of parallelism.

Comment Re:Let's see how that sounds in 5-10 years time .. (Score 1) 449

Well, there's obviously no need to add more cores/parallelism until there's a widespread need for it (unless you are Chinese, when octocore is a must!), but I think the need is coming pretty fast.

There are all sorts of cool and useful things you can do with high quality speech, image, etc recognition, natural language processing and AI, and these areas are currently making rapid advances in the lab and slowly starting to trickle out into consumer devices (e.g. speech and natural language support both in iOS and Android).

What is fairly new is that, in the lab, state of the art results in many of these fields are now coming from deep learning / recurrent neural net architectures rather than traditional approaches (e.g. MFCC + HMM for speech recognition), and these require massive parallelism and compute power. These technologies will continue to migrate to consumer devices as they mature and as the compute requirements become achievable...

Smart devices (eventually *really* smart) are coming, and the process has already started.

Comment Re:Let's see how that sounds in 5-10 years time .. (Score 1) 449

The trouble is that extrapolating the present isn't a great way to predict the future!

If computers were never required to do anything much different than they do right now then of course the processing/memory requirements won't change either.

But... of course things are changing, and one change that has been a long time coming but is finally hitting consumer devices is the arrival of hard "fuzzy" problems like speech recognition, image/object recognition, natural language processing, artificial intelligence... and the computing needs of these types of application are way different than running traditional software. We may start with accelerators for state-of-the-art offline speech recognition, but in time (a few decades) I expect we'll have pretty sophisticated AI (think smart assistant) functionality widely available that may shake up hardware requirements more significantly.

Comment Re:Linus is right (Score 1) 449

Yeah, parallel computing is mostly hard the way most of us are trying to do it today, but advances will be driven by need, and advised by past failures, not limited by them.

You also argue against yourself by pointing out that CPUs have hit a speed limit - this is of course precisely why the only way to increase processing power is to use parallelism, and it provides added incentive to find ways to make parallel hardware easier to use.

The way massively parallel hardware will be used in the future should be obvious... we'll have domain specific high level libraries that will encapsulate the complexity, just as we do in any other area (and as we do for massively parallel graphics today). Massive parallelism is mostly about SIMD, where the programmer basically wants to provide the data ("D") and high level instruction ("I") and have a high level library take on the donkey work of implementing it on a given platform.
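
A mundane illustration of that division of labour (numpy standing in as the "high level library"; whether it dispatches to SIMD units, multiple cores, or something else is the library's problem, not the caller's):

    import numpy as np

    a = np.random.randn(1_000_000)
    b = np.random.randn(1_000_000)

    # The programmer supplies the data ("D") and the high-level instruction ("I")...
    c = a * b           # elementwise multiply
    s = np.dot(a, b)    # dot product

    # ...and the library does the donkey work of mapping it onto the platform.
    # The equivalent explicit loop the programmer never has to write:
    s2 = 0.0
    for i in range(len(a)):
        s2 += a[i] * b[i]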

Current parallel computing approaches such as OpenCL, OpenMP, and CUDA are all just tools to be used by the library writers or those (who will become increasingly few) whose needs are not met by off-the-shelf high level building blocks. No doubt the tools will get better, but for most programmers it makes no difference as they use libraries rather than write them. Compare, for example, all the advances in templates and generic programming in C++11 and later... how many C++ programmers are intimately familiar with and proficient in these new facilities, and how many actually need to use them as opposed to enjoying the user-friendly facilities of the STL built atop them?!
