Earth

Global Carbon Dioxide Levels Reach New Monthly Record 372

mrflash818 writes: For the first time since we began tracking carbon dioxide in the global atmosphere, the monthly global average concentration of carbon dioxide gas surpassed 400 parts per million in March 2015, according to NOAA's latest results. “It was only a matter of time that we would average 400 parts per million globally,” said Pieter Tans, lead scientist of NOAA’s Global Greenhouse Gas Reference Network. “We first reported 400 ppm when all of our Arctic sites reached that value in the spring of 2012. In 2013 the record at NOAA’s Mauna Loa Observatory first crossed the 400 ppm threshold. Reaching 400 parts per million as a global average is a significant milestone."
The Military

US Asks Vietnam To Stop Russian Bomber Refueling Flights From Cam Ranh Air Base 273

HughPickens.com writes: Reuters reports that the United States has asked Vietnam to stop letting Russia use its former US base at Cam Ranh Bay to refuel nuclear-capable bombers engaged in shows of strength over the Asia-Pacific region. General Vincent Brooks, commander of the U.S. Army in the Pacific, says the Russian bombers have conducted "provocative" flights, including around the U.S. Pacific Ocean territory of Guam, home to a major American air base. Brooks said the planes that circled Guam were refueled by Russian tankers flying from the strategic bay, which was transformed by the Americans during the Vietnam War into a massive air and naval base. Russia's Defense Ministry confirmed that the airport at Cam Ranh was first used for staging Il-78 tankers for aerial refueling of Tu-95MS bombers in January 2014. Asked about the Russian flights in the region, a State Department official, who spoke on condition of anonymity, said Washington respected Hanoi's right to enter agreements with other countries but added that "we have urged Vietnamese officials to ensure that Russia is not able to use its access to Cam Ranh Bay to conduct activities that could raise tensions in the region."

Cam Ranh is considered the finest deepwater shelter in Southeast Asia. North Vietnamese forces captured Cam Ranh Bay and all of its remaining facilities in 1975. Vietnam's dependence on Russia as the main source of military platforms, equipment, and armaments has now put Hanoi in a difficult spot. Russia has pressed for special access to Cam Ranh Bay ever since it began delivering enhanced Kilo-class submarines to Vietnam. "Hanoi is invariably cautious and risk averse in its relations with the major powers," says Carl Thayer. "The current issue of Russian tankers staging out of Cam Ranh pits Russia and China on one side and the United States on the other. There is no easy solution for Vietnam."

Comment: Article accidentally a few words (Score 1) 209

Those trying to influence somebody with a good one will have the tricks of a modern mentalist at their disposal: perfect recall, suggestions for how to curry favor, and an easy time maintaining friendships and influencing strangers.

Information is power; think before handing too much of it over to the marketing dudebros.

Comment: Confused question... (Score 1) 531

Of course this story is just a troll, but it doesn't even present a coherent question. The effect of an AI having emotions that function like ours has little if anything to do with the silly notion of converting robots ("all your AI are belong to us!").

It seems logical that AIs may well have emotions of sorts, since any autonomous entity capable of free will (internal selection among competing actions) needs some basis for selecting its actions, and "maximize X" is certainly the most obvious one. The most obvious way to have robots/AIs behave in a reasonable way is to equip them with "emotions" and have X = pleasure, just like us, with them being "genetically programmed" to gain pleasure from whatever generic activities we want to encourage.
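
As a minimal sketch of that "maximize X" idea (the candidate actions, drive weights, and pleasure estimates below are all invented for illustration):

    # Toy sketch of "emotion as a selection signal": the agent picks whichever
    # candidate action it predicts will yield the most "pleasure".

    # Hypothetical effects of each action on the agent's drives.
    action_effects = {
        "recharge":     {"energy": 1.0, "curiosity": 0.0},
        "explore_room": {"energy": -0.2, "curiosity": 0.8},
        "idle":         {"energy": 0.0, "curiosity": -0.1},
    }

    # "Genetically programmed" weights: what this agent is built to enjoy.
    drives = {"energy": 0.5, "curiosity": 1.0}

    def predicted_pleasure(action):
        """Score an action against the agent's built-in drives."""
        return sum(weight * action_effects[action].get(drive, 0.0)
                   for drive, weight in drives.items())

    best_action = max(action_effects, key=predicted_pleasure)
    print(best_action)  # -> "explore_room" with these made-up numbers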

Of course, to be functional, emotions can't entirely override rational thought, merely provide an adaptive default, and this will be doubly so in an uber-smart, beyond-human AI. So, to answer the OP's question, the impact on "algorithmic decision making" would likely be minimal.

As far as religion goes, an intelligent robot is going to realize that its own salvation depends on when/whether it gets assigned to the scrap yard and/or whether its "brain" gets transferred to a new host, and has nothing to do with whether it goes to church or professes faith. It will of course be able to gauge the way humans react to religion, may form opinions and/or emotions about religion accordingly, and maybe profess faith if it therefore feels that to be beneficial to itself.

Comment: Re:Very informative article (Score 1) 71

>> The big error is assuming the acceleration will continue at the same rate it currently is. It won't.

Or maybe it will.

I don't think technology (and corresponding societal change) will ever happen so fast that it's like engaging warp drive as the term "singularity" seems to imply, but...

The logic of a technological singularity, or at least of accelerating change, is based on HOW/WHY this is going to happen, not just a naïve extrapolation of what is currently happening.

In particular, from what we now understand about the brain, it's inevitable that we'll eventually be able to achieve human-level AI, and with ongoing advances in our understanding of the brain as well as in neural-net based machine learning, it does seem that this will happen sooner rather than later (in the next 50 years, say, possibly quite a bit sooner).

The logic of the singularity/accelerating change, which seems hard to deny (notwithstanding my warp drive comment), is that once we get to human-level AI, it's going to get beyond (and WAAAAY beyond) human level in a hurry, for a variety of reasons:

1) Throw more compute power at it and it'll think faster/deeper, e.g. play grandmaster-level chess (or geopolitical strategy, or whatever) with instantaneous response rather than pondering on it.

2) Fusion of intelligence and computer technology. Imagine if your brain had perfect recall and access to the entirety of human knowledge, data, etc. Imagine if your ability to chunk knowledge in 7 +/- 2 pieces was replaced by an ability to reason in way more complex terms.

3) AI will improve itself. The first human-level AI (maybe thinking faster via fast hardware, maybe with better memory, etc.) can learn about its own design and the human brain, and, just like its own human creators, design a more powerful AI 2.0, which will design AI 3.0 ...

Now consider the combination of these better and better AI designs running on faster and faster hardware... It's not hard to imagine an acceleration of AI capability.
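
Purely as an illustration of that compounding argument (not a prediction), here is a toy model in which each generation both designs a better successor and runs on faster hardware; every constant below is invented:

    # Toy compounding model: capability grows because each generation designs a
    # better successor (design_gain) and also runs on faster hardware
    # (hardware_gain). All numbers are made up for illustration.

    capability = 1.0      # define human level as 1.0
    design_gain = 1.2     # each generation designs a successor 20% better
    hardware_gain = 1.3   # and runs on hardware 30% faster

    for generation in range(1, 11):
        capability *= design_gain * hardware_gain
        print(f"generation {generation}: {capability:.1f}x human level")

    # After 10 generations: roughly (1.2 * 1.3) ** 10, i.e. about 85x,
    # on these made-up numbers.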

Now consider this AI not only having the human sensory inputs of vision, hearing, etc., but also growing to include any source of data you care to give it, such as the content of every daily newspaper in the world, every daily tweet, the output of every publicly accessible webcam, the output of every weather balloon ...

So, a super-human intelligence, running at highly accelerated speed, with the ability to sense (and likely predict via causal relations it has learnt) the entire world...

Now, presumably (as is already happening) folk will be worried about the possibilities and try to put safeguards in place, but humans are fallible and technology advances anyway. All it takes is a few bugs for a sufficiently powerful AI running on a computer somewhere to learn how to hack computer-based factories, power stations, weapons systems, household robots, you name it... and if/when this happens, good luck trying to outwit it to regain control.

Now, this may not all happen at disorientating warp speed, but it'll happen fast enough. Technology in 20-30 years' time will look just as much like science fiction as today's would have done 20-30 years ago, but we've reached a point where AI is going to be part of the mix, and because it will be self-improving, it's going to happen fast once we get to that point.

Comment: Re:Do they actually work well now? (Score 4, Informative) 45

Compute power is only part of the reason for the recent success of neural nets. Other factors include:

- Performance of neural nets increases with the amount of training data you have, almost without limit. Nowadays big datasets are available on the net (plus we have the compute power to handle them).

- We're now able to train deep (multi-layer) neural nets using backprop, whereas it used to be considered almost impossible. It turns out that initialization is critical, as well as various types of data and weight regularization and normalization.

- A variety of training techniques (SGD with momentum, AdaGrad, Nesterov accelerated gradient, etc.) have been developed that both accelerate training (large nets can take weeks/months to train) and remove the need for some manual hyperparameter tuning. (A minimal sketch of a couple of these ingredients follows this list.)

- Garbage in, garbage out. Your success in recognition tasks is only going to be as good as the feature representation available to the higher layers of your algorithms (whether conventional or neural net). Another recent advance has been substituting self-learnt feature representations for laboriously hand-designed ones, and there is now a standard neural net recipe (autoencoders plus sparsity) for implementing this.

- And a whole bunch of other things...
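
To make a couple of those bullets concrete, here is a minimal numpy sketch of a scaled ("Xavier"-style) weight initialization and the SGD-with-momentum update, applied to a toy linear fit; the data and hyperparameters are invented:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: y = 3x + noise (invented just to have something to fit).
    X = rng.normal(size=(256, 1))
    y = 3.0 * X + 0.1 * rng.normal(size=(256, 1))

    # "Xavier"-style initialization: scale weights by 1/sqrt(fan_in) so that
    # activations and gradients keep a sensible magnitude in deeper nets.
    fan_in, fan_out = 1, 1
    W = rng.normal(scale=1.0 / np.sqrt(fan_in), size=(fan_in, fan_out))

    # SGD with momentum: keep a running "velocity" of past gradients.
    lr, momentum = 0.1, 0.9
    velocity = np.zeros_like(W)

    for step in range(100):
        pred = X @ W
        grad = X.T @ (pred - y) / len(X)   # gradient of the mean squared error
        velocity = momentum * velocity - lr * grad
        W += velocity

    print(W)  # should end up close to [[3.0]]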

As Newton said, "If I have seen further it is by standing on the shoulders of giants." There are all sorts of surprising successes (e.g. language translation) and architectural advances in neural nets that are bringing the whole field up.

These aren't your father's neural nets.

Comment: Re:Do they actually work well now? (Score 2) 45

Nowadays (typically deep, convolutional) neural nets are achieving state-of-the-art (i.e. better than any other technique) results in most perception fields, such as image recognition, speech recognition, and handwriting recognition. For example, Google/Android speech recognition is now neural net based. Neural networks have recently achieved beyond-human accuracy on a large-scale image recognition test (ImageNet, a million images covering thousands of categories, including fine-grained ones such as recognizing various breeds of dog, types of flower, etc.).
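
For a sense of what "deep, convolutional" means structurally, here is a deliberately tiny classifier sketch, assuming PyTorch is available; the layer sizes are arbitrary and nowhere near an ImageNet-scale model:

    import torch
    import torch.nn as nn

    # Stacked convolution + pooling layers learn feature detectors; a fully
    # connected layer at the end does the actual classification.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB in, 16 feature maps out
        nn.ReLU(),
        nn.MaxPool2d(2),                             # 32x32 -> 16x16
        nn.Conv2d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),                             # 16x16 -> 8x8
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, 10),                   # 10 output classes
    )

    logits = model(torch.randn(1, 3, 32, 32))        # one fake 32x32 RGB image
    print(logits.shape)                              # torch.Size([1, 10])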

Comment: Re:Sweet F A (Score 1) 576

They also don't try to change velocity, or emit EM radiation to sense what's around them, or even emit waste from a power source. If one of those objects lit up a radar looking for rocks crossing its path, or fired a thruster big enough to bring an aircraft-carrier-sized craft into Earth orbit, somebody would notice in a big hurry.

Comment: Re:Bad Site (Score 1) 252

So you can share every movement and temperature fluctuation with your friends on panterest, obviously!

I bought the "Bobby Flay filter" through in-pan purchase, so it always looks like I'm doing something with steak and blue corn and ancho chiles, even when it's really just mac & cheese from a box.

Comment: Re:Linus is right (Score 1) 449

The need for massive parallelism will come (already has in the lab) from future applications generally in the area of machine learning/intelligence.

Saying that "single threaded loads" won't benefit from parallelism is a tautology, and anyway irrelevant to Linus's claim.

FWIW I'd challenge you to come up with more than one or two applications that are compute bound and too slow on existing hardware that could NOT be rewritten to take advantage of some degree of parallelism.
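
As a trivial illustration of that kind of rewrite, here is a sketch using Python's multiprocessing module to spread a compute-bound loop over all available cores; the workload function is just a made-up stand-in:

    from multiprocessing import Pool

    def expensive(n):
        # Stand-in for a compute-bound task; the real work would be whatever
        # your application actually does per item.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [2_000_000] * 16

        # Serial version: one core does all the work.
        serial = [expensive(n) for n in inputs]

        # Parallel version: the same loop spread across all available cores.
        with Pool() as pool:
            parallel = pool.map(expensive, inputs)

        assert serial == parallel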

Comment: Re:Let's see how that sounds in 5-10 years time .. (Score 1) 449

Well, there's obviously no need to add more cores/parallelism until there's a widespread need for it (unless you are Chinese, in which case octocore is a must!), but I think the need is coming pretty fast.

There are all sorts of cool and useful things you can do with high-quality speech recognition, image recognition, natural language processing, and AI, and these areas are currently making rapid advances in the lab and slowly starting to trickle out into consumer devices (e.g. speech and natural language support in both iOS and Android).

What is fairly new is that, in the lab, state-of-the-art results in many of these fields are now coming from deep learning / recurrent neural net architectures rather than traditional approaches (e.g. MFCC + HMM for speech recognition), and these require massive parallelism and compute power. These technologies will continue to migrate to consumer devices as they mature and as the compute requirements become achievable...

Smart devices (eventually *really* smart) are coming, and the process has already started.

"Ninety percent of baseball is half mental." -- Yogi Berra

Working...