Comment Confused question... (Score 1) 531

Of course this story is just a troll, but it doesn't even present a coherent question. The effect of an AI having emotions that function like ours has little if anything to do with the silly notion of converting robots ("all your AI are belong to us!").

It seems logical that AIs may well have emotions of sorts, since any autonomous entity capable of free will (internal selection among competing actions) needs some basis for selecting its actions, and "maximize X" is certainly the most obvious one. The most obvious way to have robots/AIs behave in a reasonable way is to equip them with "emotions" and set X = pleasure, just like us, with them being "genetically programmed" to gain pleasure from whatever generic activities we want to encourage.
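
To make the "maximize X" idea concrete, here's a minimal sketch (all the names and the predictedPleasure scoring are mine, purely illustrative) of an agent whose "emotion" is just a learned/programmed valuation it maximizes when choosing among competing actions:

    #include <algorithm>
    #include <string>
    #include <vector>

    // Purely illustrative: an "emotion-driven" agent scores each candidate action
    // by the pleasure it predicts and simply picks the maximum.
    struct Action {
        std::string name;
        double predictedPleasure;  // the "genetically programmed" / learned valuation
    };

    // Assumes candidates is non-empty.
    Action selectAction(const std::vector<Action>& candidates) {
        return *std::max_element(candidates.begin(), candidates.end(),
            [](const Action& a, const Action& b) {
                return a.predictedPleasure < b.predictedPleasure;
            });
    }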

Of course, to be functional, emotions can't entirely override rational thought, merely provide an adaptive default, and this will be doubly so in an uber-smart, beyond-human AI. So, to answer the OP's question, the impact on "algorithmic decision making" would likely be minimal.

As far as religion goes, an intelligent robot is going to realize that its own salvation is based on when/whether it gets assigned to the scrap yard and/or whether its "brain" gets transferred to a new host... nothing to do with whether it goes to church or professes faith. It will of course be able to gauge the way humans react to religion, may form opinions and/or emotions about religion accordingly, and maybe profess faith if it feels that to be beneficial to itself.

Comment Re:Very informative article (Score 1) 71

>> The big error is assuming the acceleration will continue at the same rate it currently is. It won't.

Or maybe it will.

I don't think technology (and corresponding societal change) will ever happen so fast that it's like engaging warp drive as the term "singularity" seems to imply, but...

The logic of a technological singularity, or at least of accelerating change, is based on HOW/WHY this is going to happen, not just a naïve extrapolation of what is currently happening.

In particular, from what we now understand about the brain, it seems inevitable that we'll eventually be able to achieve human-level AI, and with ongoing advances in our understanding of the brain as well as in neural-net based machine learning, it does seem that this will happen sooner rather than later (in the next 50 years, say, possibly quite a bit sooner).

The logic of the singularity/accelerating change, which seems hard to deny (notwithstanding my warp drive comment), is that once we get to human-level AI, it's going to get beyond (and WAAAAY beyond) human level in a hurry, for a variety of reasons:

1) Throw more compute power at it and it'll think faster/deeper, e.g. play grandmaster-level chess (or geopolitical strategy, or whatever) with an instantaneous response rather than pondering over it.

2) Fusion of intelligence and computer technology. Imagine if your brain had perfect recall and access to the entirety of human knowledge, data, etc. Imagine if your ability to chunk knowledge into 7 +/- 2 pieces was replaced by an ability to reason in far more complex terms.

3) AI will improve itself. The first human-level AI (maybe thinking faster via fast hardware, maybe with better memory, etc.) can learn about its own design and the human brain and, just like its own human creators, design a more powerful AI 2.0, which will design AI 3.0...

Now consider the combination of these better and better AI designs running on faster and faster hardware... It's not hard to imagine an acceleration of AI capability.

Now consider this AI not only having the human sensory inputs of vision, hearing, etc., but also growing to include any source of data you care to give it, such as the content of every daily newspaper in the world, every daily tweet, the output of every publicly accessible webcam, the output of every weather balloon...

So, a super-human intelligence, running at highly accelerated speed, with the ability to sense (and likely predict via causal relations it has learnt) the entire world...

Now, presumably (as is already happening) folk will be worried about the possibilities and try to put safeguards in place, but humans are fallible and technology advances anyway. All it takes is a few bugs for a sufficiently powerful AI running on a computer somewhere to learn how to hack computer-based factories, power stations, weapons systems, household robots, you name it... and if/when this happens, good luck trying to outwit it to regain control.

Now, this may not all happen at disorientating warp speed, but it'll happen fast enough. Technology in 20-30 years' time will look just as much like science fiction as today's would have done 20-30 years ago, but we've reached a point where AI is going to be part of the mix, and because it will be self-improving it's going to happen fast once we get to that point.

Comment Re:Do they actually work well now? (Score 4, Informative) 45

Compute power is only part of the reason for the recent success of neural nets. Other factors include:

- Performance of neural nets increases with the amount of training data you have, almost without limit. Nowadays big datasets are available on the net (plus we have the compute power to handle them).

- We're now able to train deep (multi-layer) neural nets using backprop, whereas it used to be considered almost impossible. It turns out that initialization is critical, as are various types of data and weight regularization and normalization.

- A variety of training techniques (SGD + momentum, AdaGrad, Nesterov accelerated gradients, etc.) have been developed that both accelerate training (large nets can take weeks/months to train) and remove the need for some manual hyperparameter tuning (see the sketch after this list).

- Garbage In, Garbage Out. Your success in recognition tasks is only going to be as good as the feature representation available to the higher layers of your algorithm (whether conventional or neural net). Another recent advance has been substituting self-learnt feature representations for laboriously hand-designed ones, and there is now a fairly standard neural net recipe (autoencoders + sparsity) for implementing this.

- And a whole bunch of other things...
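
As a flavour of one of those training tweaks, here's a minimal sketch of the classic SGD + momentum update (plain C++, names and hyperparameters are mine and purely illustrative; real implementations live inside the ML libraries and run on GPUs):

    #include <cstddef>
    #include <vector>

    // SGD with momentum: the velocity term smooths and accelerates the raw
    // gradient step. Typical values: learningRate ~ 0.01, momentum ~ 0.9.
    void sgdMomentumStep(std::vector<double>& weights,
                         const std::vector<double>& gradients,  // dLoss/dWeight for a minibatch
                         std::vector<double>& velocity,         // same size as weights, starts at 0
                         double learningRate,
                         double momentum) {
        for (std::size_t i = 0; i < weights.size(); ++i) {
            velocity[i] = momentum * velocity[i] - learningRate * gradients[i];
            weights[i] += velocity[i];
        }
    }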

As Newton said, "If I have seen further it is by standing on the shoulders of giants"... there are all sorts of surprising successes (e.g. language translation) and architectural advances in neural nets that are bringing the whole field up.

These aren't your father's neural nets.

Comment Re:Do they actually work well now? (Score 2) 45

Nowadays (typically deep, convolutional) neural nets are achieving state-of-the-art (i.e. better than any other technique) results in most perception fields, such as image recognition, speech recognition and handwriting recognition. For example, Google/Android speech recognition is now neural net based. Neural networks have recently achieved beyond-human accuracy on a large-scale image recognition test (ImageNet - a million images covering thousands of categories, including fine-grained ones such as recognizing various breeds of dog, types of flower, etc.).

Comment Re:Sweet F A (Score 1) 576

They also don't try to change velocity, or emit EM radiation to sense what's around them, or even emit waste from a power source. If one of those objects lit up a radar looking for rocks crossing its path, or fired a thruster big enough to bring an aircraft-carrier-sized craft into Earth orbit, somebody would notice in a big hurry.

Comment Re:Bad Site (Score 1) 252

So you can share every movement and temperature fluctuation with your friends on panterest, obviously!

I bought the "Bobby Flay filter" through in-pan purchase, so it always looks like I'm doing something with steak and blue corn and ancho chiles, even when it's really just mac & cheese from a box.

Comment Re:Linus is right (Score 1) 449

The need for massive parallelism will come (already has in the lab) from future applications generally in the area of machine learning/intelligence.

Saying that "single threaded loads" won't benefit from parallelism is a tautology and anyways irrelevant to Linus's claim.

FWIW I'd challenge you to come up with more than one or two applications that are compute bound and too slow on existing hardware that could NOT be rewritten to take advantage of some degree of parallelism.

Comment Re:Let's see how that sounds in 5-10 years time .. (Score 1) 449

Well, there's obviously no need to add more cores/parallelism until there's a widespread need for it (unless you are Chinese, in which case octocore is a must!), but I think the need is coming pretty fast.

There are all sorts of cool and useful things you can do with high-quality speech recognition, image recognition, natural language processing and AI, and these areas are currently making rapid advances in the lab and slowly starting to trickle out into consumer devices (e.g. speech and natural language support in both iOS and Android).

What is fairly new is that, in the lab, state-of-the-art results in many of these fields are now coming from deep learning / recurrent neural net architectures rather than traditional approaches (e.g. MFCC + HMM for speech recognition), and these require massive parallelism and compute power. These technologies will continue to migrate to consumer devices as they mature and as the compute requirements become achievable...
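
For a feel of why these architectures are such compute hogs, here's a minimal sketch of a single step of a vanilla recurrent net (my own simplified version; real systems use LSTM/GRU variants and do these matrix products in parallel on GPUs):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    using Matrix = std::vector<std::vector<double>>;

    // One step of a vanilla RNN: hNew = tanh(Wx * x + Wh * h + b).
    // Every output element is an independent dot product - exactly the kind of
    // work that maps onto massively parallel hardware.
    std::vector<double> rnnStep(const Matrix& Wx,               // H x I input weights
                                const Matrix& Wh,               // H x H recurrent weights
                                const std::vector<double>& b,   // H biases
                                const std::vector<double>& x,   // input at this timestep
                                const std::vector<double>& h) { // previous hidden state
        std::vector<double> hNew(b.size());
        for (std::size_t i = 0; i < hNew.size(); ++i) {
            double sum = b[i];
            for (std::size_t j = 0; j < x.size(); ++j) sum += Wx[i][j] * x[j];
            for (std::size_t j = 0; j < h.size(); ++j) sum += Wh[i][j] * h[j];
            hNew[i] = std::tanh(sum);
        }
        return hNew;
    }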

Smart devices (eventually *really* smart) are coming, and the process has already started.

Comment Re:Let's see how that sounds in 5-10 years time .. (Score 1) 449

The trouble is that extrapolating the present isn't a great way to predict the future!

If computers were never required to do anything much different from what they do right now, then of course the processing/memory requirements wouldn't change either.

But... of course things are changing, and one change that has been a long time coming but is finally hitting consumer devices is the hard "fuzzy" problems: speech recognition, image/object recognition, natural language processing, artificial intelligence... and the computing needs of these types of application are way different from those of traditional software. We may start with accelerators for state-of-the-art offline speech recognition, but in time (a few decades) I expect we'll have pretty sophisticated AI (think smart assistant) functionality widely available that may shake up hardware requirements more significantly.

Comment Re:Linus is right (Score 1) 449

Yeah, parallel computing is mostly hard the way most of us are trying to do it today, but advances will be driven by need, and informed by past failures, not limited by them.

You also argue against yourself by pointing out that CPUs have hit a speed limit - this is of course precisely why the only way to increase processing power is to use parallelism, and it provides added incentive to find ways to make parallel hardware easier to use.

The way massively parallel hardware will be used in the future should be obvious... we'll have domain-specific high-level libraries that encapsulate the complexity, just as we do in any other area (and as we do for massively parallel graphics today). Massive parallelism is mostly about SIMD, where the programmer basically wants to provide the data ("D") and a high-level instruction ("I") and have a library take on the donkey work of implementing it on a given platform.
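
As a sketch of what I mean by "provide the D and the I": with the parallel algorithms being proposed for standard C++ (assuming a compiler and standard library that actually support them), the programmer hands over the data and a high-level operation and the library does the donkey work of spreading it across cores/SIMD lanes:

    #include <algorithm>
    #include <cmath>
    #include <execution>   // parallel execution policies (part of the C++17 proposals)
    #include <vector>

    int main() {
        std::vector<double> data(1000000, 2.0);

        // The "D" is the vector, the "I" is "take the square root of everything";
        // how that gets mapped onto threads/SIMD lanes is the library's problem.
        std::transform(std::execution::par, data.begin(), data.end(), data.begin(),
                       [](double v) { return std::sqrt(v); });
    }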

Current parallel computing approaches such as OpenCL, OpenMP and CUDA are all just tools to be used by the library writers or by those (who will become increasingly few) whose needs are not met by off-the-shelf high-level building blocks. No doubt the tools will get better, but for most programmers it makes no difference, as they use libraries rather than write them. Compare, for example, all the advances in templates and generic programming in C++11 and later... how many C++ programmers are intimately familiar with and proficient in these new facilities, and how many actually need to use them as opposed to enjoying the user-friendly facilities of the STL built atop them?!

Comment Let's see how that sounds in 5-10 years time ... (Score 1) 449

It sounds rather like Bill Gates' [supposed] "640KB is enough for anyone", but there's no denying that Linus said this one!

Saying that graphics is the only client-side app that can utilize large-scale parallelism is short-sighted bunk, and it ignores what is going on today, let alone the future. In 20 years' time we'll have handheld devices that would look just as much like science fiction, if available today, as today's devices would have looked 20 years ago.

I have no doubt whatsoever that in the next few decades we'll see human-level AI in handheld devices as well as in server-based apps, and you'd better believe that the computing demands (both processing and memory) will be massive. Even today we're starting to see impressive advances in speech and image recognition, and the underlying technology is increasingly (massively parallel) connectionist deep learning architectures, not your grandfather's (or Linus's) traditional approaches. Current deep-learning architectures can be optimized to use significantly fewer resources for recognition-only deployment versus learning, but no doubt we'll see live learning in the future too as AI advances and technology develops.

Linus's relegation of parallelism to the server side is equally if not more short-sighted than his lack of vision of client-side CPU-sucking applications! If you want systems that are always available, responsive and scalable, then that calls for distributed (client-side) implementation, not server-based. Future devices are not only going to be smart, but the smarts are going to be local. Bye-bye server-based Siri.

Comment Re:No group "owns" any day on the calendar. (Score 4, Informative) 681

Close, but no banana.

The Dec 25th date was co-opted from the Roman holiday/feast of Dies Natalis Solis Invicti (the birthday of the sun-god Sol Invictus), the date being chosen as it was then reckoned to be the winter solstice, when the days start to get longer again (i.e. the sun is reborn). This holiday was created by the Roman emperor Aurelian in the 3rd century AD, and was co-opted by the Christians maybe 100 years later.

Saturnalia was a separate - very popular - Roman holiday in mid-to-late December, which FWIW had a present-giving component.

However, the gross external forms of modern Christmas - Tree, Holly, Mistletoe (i.e. general greenery) and Yule log - all come from a different, northern European, winter solstice celebration called "Yule".

So, the Xmas feast/date comes from Natalis Solis Invicti, the Tree/Holly/etc. from Yule, the presents *perhaps* from Saturnalia, and we'll have to concede the nativity (there's that "natalis" again) to the Christians, who prior to 300 AD would never have celebrated Jesus' birth!

Comment Re:It's not him.. (Score 2) 681

He didn't fold (where's the back down?). He blatantly and successfully trolled the Christian fundamentalists**, and his follow-up was little more than a gloat.

** and/or anyone ignorant enough of history to think that Jesus was born on 25th Dec and/or was the basis of the Dec 25th holiday we now call "Christmas"

Comment C++ getting better and better... (Score 1) 641

It seems like C++11 was finalized only yesterday, but we already have C++14 finalized and C++17 in the works...

This is hardly the same language it was a few years ago - the power and ease of use that has been added, for both library and application developers, is amazing.

Anyone programming in C++ who isn't thoroughly familiar with all the new stuff in C++11 is missing out tremendously...
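
A tiny, hand-wavy taste of what I mean (nothing exotic, but none of it existed in C++03) - auto, range-for, lambdas and make_unique alone make everyday code shorter and safer:

    #include <algorithm>
    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    int main() {
        // Uniform initialization + auto + range-for (C++11).
        std::vector<std::string> names{"Bjarne", "Herb", "Scott"};
        for (const auto& name : names)
            std::cout << name << '\n';

        // Lambdas make the standard algorithms genuinely pleasant to use.
        std::sort(names.begin(), names.end(),
                  [](const std::string& a, const std::string& b) { return a.size() < b.size(); });

        // make_unique (C++14) + smart pointers: no explicit new/delete.
        auto greeting = std::make_unique<std::string>("hello, modern C++");
        std::cout << *greeting << '\n';
    }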
