


Submission + - Deep neural networks are easily fooled: Is this Snowcrash for AI? (youtube.com) 1

anguyen8 writes: Deep neural networks (DNNs) trained with Deep Learning have recently produced mind-blowing results in a variety of pattern-recognition tasks, most notably speech recognition, language translation, and recognizing objects in images, where they now perform at near-human levels. But do they see the same way we do?

Nope. Researchers recently found that it is easy to produce images that are completely unrecognizable to humans, but that DNNs classify with near-certainty as everyday objects. For example, DNNs look at TV static and declare with 99.99% confidence that it is a school bus. An evolutionary algorithm produced the synthetic images by generating pictures and selecting for those that a DNN believed to be an object (i.e., "survival of the school-bus-iest"). The resulting computer-generated images look like modern, abstract art. The pictures also help reveal what DNNs learn to care about when recognizing objects (e.g., a school bus is alternating yellow and black lines, but does not need to have a windshield or wheels), shedding light on the inner workings of these DNN black boxes.
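The "survival of the school-bus-iest" idea can be sketched in a few lines: generate random images, score each by how confident a classifier is that it shows the target class, keep the best, and mutate them. The fitness function below is a hypothetical stand-in, not a real DNN; following the paper's stripe-pattern finding, it simply rewards contrast between adjacent rows.

```python
import random

def dnn_confidence(image):
    # Stand-in for a real DNN's "school bus" confidence. Entirely
    # hypothetical: it rewards alternating bright/dark horizontal
    # stripes, mimicking the pattern the paper found scores highly.
    total = 0.0
    for y in range(len(image) - 1):
        row_mean = sum(image[y]) / len(image[y])
        next_mean = sum(image[y + 1]) / len(image[y + 1])
        total += abs(row_mean - next_mean) / 255.0
    return total / (len(image) - 1)

def random_image(w=8, h=8):
    return [[random.randint(0, 255) for _ in range(w)] for _ in range(h)]

def mutate(image, rate=0.1):
    # Re-randomize each pixel with small probability.
    return [[random.randint(0, 255) if random.random() < rate else px
             for px in row]
            for row in image]

def evolve(generations=200, pop_size=20):
    # "Survival of the school-bus-iest": keep the half of the
    # population the classifier likes best, refill with mutants.
    population = [random_image() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=dnn_confidence, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]
    return max(population, key=dnn_confidence)
```

Even this toy loop quickly discovers stripe patterns that the stand-in scorer rates far above any random image, which is the essence of the attack.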

Submission + - Deep Neural Networks are Easily Fooled: Is this Snowcrash for AI? (arxiv.org) 1

An anonymous reader writes: A new paper on deep learning generates "snow crash" images that fool deep neural networks, producing fascinating pictures and raising security concerns. The paper is called "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images".

Here is the abstract:

Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects. Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
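The abstract names two ways of finding such images: evolutionary algorithms and gradient ascent. The gradient-ascent variant can be illustrated on a toy differentiable "classifier": a single logistic unit with invented weights (a real DNN has millions of parameters, so this is only a sketch of the general technique, not the paper's setup).

```python
import math

# Toy differentiable "classifier": one logistic unit over a 4-pixel
# image. The weights are made up purely for illustration.
WEIGHTS = [0.9, -0.5, 0.3, -0.8]

def confidence(image):
    z = sum(w * px for w, px in zip(WEIGHTS, image))
    return 1.0 / (1.0 + math.exp(-z))

def gradient_ascent(image, steps=100, lr=0.5):
    # Repeatedly nudge each pixel in the direction that increases the
    # class confidence, clamping pixels to [0, 1].
    img = list(image)
    for _ in range(steps):
        p = confidence(img)
        # For a logistic unit, d(confidence)/d(pixel_i) = p * (1 - p) * w_i.
        img = [min(1.0, max(0.0, px + lr * p * (1.0 - p) * w))
               for px, w in zip(img, WEIGHTS)]
    return img

start = [0.5, 0.5, 0.5, 0.5]
fooling = gradient_ascent(start)
```

The optimized "image" drives positively weighted pixels up and negatively weighted ones down, raising the class confidence without any constraint that the result look like anything to a human.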

Submission + - Is this the End of the Computer Mouse? (dailymail.co.uk)

anguyen8 writes: The computer mouse has had a good run. Almost 70 years since the design was first patented, it is now under threat from a smart 'thimble'. The wearable 3DTouch device is fitted with an accelerometer and gyroscope, and lets people control an onscreen cursor using just a wave of their finger.

'3DTouch enables users to use their fingers or thumb as a 3D input device with the capability of performing 3D selection, translation, and rotation,' explained the researchers. 'It is designed to fill the missing gap of a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This presents a low-cost solution to designing and implementing such a device. Modular solutions like 3DTouch open up a whole new design space for interaction techniques to further develop.'

'With 3DTouch, we attempted to bring 3D interaction and applications a step closer to users in everyday life.'
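As a rough sketch of how a finger-worn device could drive a cursor: angular rates from the gyroscope can be integrated into pixel deltas, air-mouse style. The gain and axis conventions below are my assumptions, not details from the paper.

```python
def gyro_to_cursor(yaw_rate, pitch_rate, dt, gain=400.0):
    """Map finger angular rates (rad/s) to cursor deltas (pixels).

    A wave of the finger yaws/pitches the thimble; integrating the
    rates over the sample interval dt gives a cursor displacement.
    The gain (pixels per radian) is an invented tuning constant.
    """
    dx = yaw_rate * gain * dt
    dy = -pitch_rate * gain * dt  # screen y axis points downward
    return dx, dy

# Example: a slow rightward wave sampled at 100 Hz for one second.
x = y = 0.0
for _ in range(100):
    dx, dy = gyro_to_cursor(yaw_rate=0.5, pitch_rate=0.0, dt=0.01)
    x += dx
    y += dy
```

A real implementation would also need drift compensation (the accelerometer is typically used to correct the gyroscope's integration drift), which is omitted here.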

Video: www.youtube.com/watch?v=QskhBeQ1uDQ
Paper: arxiv.org/abs/1406.5581

Comment Re:Fact Check (Score 1) 65

Control-VR already has a working prototype of the 10-finger version of the same idea. The addition of a laser mouse sensor is new, but why is that worth a paper?

Control-VR is still in its pre-ordering phase, and similar interfaces like Fin and Ring are also emerging prototypes. All of these, including 3DTouch, came out around the same time!

Comment Re:Not gonna replace the mouse (Score 1) 65

Nobody wants to hold something. The mouse is popular because you can get stuff done with minimum energy and high efficiency. Add an up/down key to the mouse or keyboard and you've got your third dimension.

So how would you use the mouse in a spatial setting such as the Cave Automatic Virtual Environment (CAVE)?

Comment Re:Already done in India (Score 1) 65

Sorry to break it to you guys, but this has already been done in India...in a more polished form.

There is this thing called Ring as well. First of all, they are all still prototypes and not on the market yet; so is 3DTouch. Second, 3DTouch serves a different niche market of 3D applications, while those two don't.

Comment Re:LEAP Motion (Score 1) 65

And from the summary: "...respond to a set of pre-programmed gestures..." That's where this one will go wrong, too.

I totally agree; even the LEAP allows user-defined gestures. However, for this device, the "pre-programmed gestures" can always be "re-programmed" as users desire, because they are ultimately just gestures (not fixed buttons or keys).

Submission + - 3DTouch: A wearable 3D input device for 3D applications (youtube.com)

anguyen8 writes: There is a wide variety of 3D input devices, from stationary desktop settings to spatial environments such as large wall displays and CAVEs. Desktop input devices such as the mouse, joystick, and touch-pad possess high precision and responsiveness thanks to a supporting surface, and incur less fatigue than spatial mid-air input devices like the Wiimote or Kinect. However, these desktop devices are not mobile, and can only support relative positioning.

In spatial environments, tracking input devices such as the Kinect, OptiTrack, and Razer Hydra support absolute positioning; however, they require a base reference, and the interaction they enable is fatiguing. GPS tracking devices support absolute positioning and are self-contained, but their accuracy is on the order of feet, which is not usable for a room-sized tracking volume.

With the explosion of 3D applications in everyday life across various 3D platforms, it is necessary to have a self-contained 3D input device that allows users to interact with 3D applications anytime, anywhere. And this device needs to incur less fatigue than mid-air gestures in order for users to practically use it over long periods.

We designed 3DTouch, a novel wearable 3D input device, worn on the fingertip for 3D manipulation tasks. 3DTouch is self-contained and designed to work universally across various 3D platforms. The device employs touch input for the benefits of passive haptic feedback and movement stability. With touch interaction, 3DTouch is also conceptually less fatiguing to use over many hours than 3D spatial input devices. Modular solutions like 3DTouch open up a whole new design space for interaction techniques to further develop.
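A minimal sketch of the kind of mapping the summary describes: a 2D delta on the touch surface is projected into a 3D world-space translation using the finger's orientation from the IMU. The rotation order and axis conventions here are assumptions for illustration, not taken from the paper.

```python
import math

def touch_to_translation(du, dv, yaw, pitch):
    """Project a 2D touch delta (du, dv) into a 3D translation.

    The touch plane's basis vectors are the world x (right) and
    y (up) axes rotated by pitch about x, then yaw about y.
    Angles are in radians; conventions are invented for illustration.
    """
    right = (math.cos(yaw), 0.0, -math.sin(yaw))
    up = (math.sin(yaw) * math.sin(pitch),
          math.cos(pitch),
          math.cos(yaw) * math.sin(pitch))
    return tuple(du * r + dv * u for r, u in zip(right, up))

# With the finger level and facing forward, a touch delta maps
# directly onto the screen plane.
flat = touch_to_translation(2.0, 3.0, yaw=0.0, pitch=0.0)
```

Because `right` and `up` are unit vectors, the magnitude of the finger's swipe is preserved regardless of orientation, which is one plausible way to make the same gesture work across different 3D platforms.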
