
+ - Amazon Robot Picking Challenge 2015->

Submitted by mikejuk
mikejuk (1801200) writes "We have all heard the stories about how Amazon treats workers in its fullfilment centers. Well now it seems it wants to do the right thing — and replace all of them by robots.
The Amazon Picking Challenge at ICRA (the IEEE International Conference on Robotics and Automation) 2015 is about getting a robot to perform the picking task. All the robot has to do is pick a list of items from the automated shelves that Amazon uses and place them into a tray ready for delivery. The prizes are $20,000 for the winner, $5,000 for second place and $1,000 for third place. In addition, each team can be awarded up to $6,000 to get them and their robot to the conference so that they can participate in the challenge. Amazon is even offering to try to act as matchmaker between robot companies and teams that lack the robot hardware they need. A Baxter Research Robot will be made available at the contest.
A robot picker sounds like it could be removing humans from a job that is much better suited to robots — but then, of course, the humans wouldn't have jobs.
We talk a lot in the abstract about the effect robots have on employment, and we are very smug about the idea that robots grow the overall job market by creating new jobs in other areas, but here we have a crystal-clear situation. The people doing the picking aren't going to get the jobs created by the robots. The robots will simply take theirs."

Link to Original Source

+ - Mozilla Labs Closed And Nobody Noticed->

Submitted by mikejuk
mikejuk (1801200) writes "When Google Labs closed there was an outcry. How could an organization just pull the rug from under so many projects?
At least Google announced what it was doing. Mozilla, it seems, has made no official announcement at all and has simply tiptoed away, leaving the lights on: the Mozilla Labs website is still accessible. Start to explore it, however, and you notice it is moribund, with the last blog post dating from December 2013 and the one before that from September 2013.
The fact that it is gone is confirmed by recent blog posts and by the redeployment of the people who used to run it. The projects that survived have been moved to their own websites. It isn't clear what has happened to the Hatchery, the incubator that invited new ideas from all and sundry.
One of the big advantages of open source is the ease with which a project can be started. One of the big disadvantages of open source is the ease with which projects can be allowed to die — often without any clear-cut time of death. It seems Mozilla applies this to groups and initiatives as much as to projects. This isn't good."

Link to Original Source

+ - Google's Neural Networks See Even Better ->

Submitted by mikejuk
mikejuk (1801200) writes "Google is becoming as well known for neural networks as the other kind. The annual ImageNet large-scale visual recognition challenge, ILSRVC, is the a testing ground for all manner of computer vision techniques, but recently it has been dominated by convolutional neural networks which are trained to recognize objects simply by being shown lots of examples in photographs.
In 2012 there was a big jump in accuracy when a deep convolutional net designed by Alex Krizhevsky, Ilya Sutskever and Geoffrey E. Hinton proved for the first time that neural networks really did work if you had enough data and enough computing power. This is the neural network Google went on to use in its photo search and, of course, Google hired the team that built it.
This year's competition also brought a jump in performance. Google's GoogLeNet, yes Goog-le-net, named in honour of the LeNet created by Yann LeCun, won the classification and detection challenge while doubling the quality of last year's results: GoogLeNet scored 44% mean average precision compared with last year's best of 23%.
In simple recognition tasks neural nets are as good as humans, so a more difficult task has now become the focus of attention. Rather than just recognizing a photo of a single object — a dog, a cat and so on — the nets now have to recognize multiple objects in a photo, say a dog wearing a hat, and localize each object by drawing a bounding box around it. This is much harder, and tens of thousands of CPU cores were used to train GoogLeNet.
Once nets can recognize individual objects and where they are, they are well on the road to scene analysis and description — a long-time goal of computer vision systems. A robot with GoogLeNet and the right higher-level software could see what was around it."
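For a feel of how detection results like these are scored, here is a minimal sketch of the overlap test commonly used: a predicted bounding box counts as a hit when its intersection-over-union with a hand-labelled box passes a threshold (0.5 is a common choice). The boxes and function names are illustrative assumptions, not the competition's evaluation code; the mean average precision figures quoted above are computed from detections matched in this way, averaged over object classes.

    # Minimal sketch of the bounding-box overlap test used to score
    # detection challenges. Boxes are (x1, y1, x2, y2); the 0.5 threshold
    # and the example boxes are illustrative assumptions.

    def iou(box_a, box_b):
        """Intersection-over-union of two axis-aligned boxes."""
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # overlap rectangle
        ix2, iy2 = min(ax2, bx2), min(ay2, by2)
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        return inter / union if union else 0.0

    def is_correct_detection(predicted, ground_truth, threshold=0.5):
        """A prediction counts as correct if it overlaps the labelled box enough."""
        return iou(predicted, ground_truth) >= threshold

    if __name__ == "__main__":
        dog_truth = (50, 40, 200, 180)   # hand-labelled box (made up)
        dog_pred = (60, 50, 210, 190)    # network's predicted box (made up)
        print(round(iou(dog_pred, dog_truth), 2))          # about 0.76
        print(is_correct_detection(dog_pred, dog_truth))   # True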

Link to Original Source

+ - The First Sophisticated Domestic Robot - The Dyson 360 Eye ->

Submitted by mikejuk
mikejuk (1801200) writes "Yes it's a vacuum cleaner! But you knew it would be. The real question is why has it taken so long to make a sophisticated robot to do the menial job of cleaning the floor. The typical Roomba style robot vac runs around at random bumping into things and getting tangled in anything it can find. It is an endearing little machine and once you have owned one the idea of not having one is unthinkable but... it is still a little dim, even for the menial job of cleaning the floor.
Enter the Dyson 360 Eye, which was launched last week. This is an upmarket cleaner. Not only does it have Radial Root Cyclone suction, it also has, as its name suggests, 360-degree vision.
A 360-degree panoramic lens lets an infrared sensor see all around, and the sensors work in conjunction with a video camera to place objects in the scene. As it moves around, the robot builds a model that is accurate to 5mm. It uses SLAM — Simultaneous Localization And Mapping — which is one mark of an advanced robot. In short, this Dyson knows where it is.
And what is the advantage of this?
Simple — the robot doesn't bump into things, and it can clean systematically, which, at the very least, is much more satisfying for a human observer.
Add to this the Radial Root Cyclone suction, tank tracks to avoid slipping or getting stuck, and an iOS and Android app to control it, and you have a very desirable floor-cleaning robot — but is it overkill? At more than $1000, it will be available early next year, and you can pre-order now even if you only want one to hack. See it in action in the video."
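To see why a map changes the cleaning behaviour, here is a toy sketch of systematic coverage: once the robot has a model of the room (here a hard-coded grid standing in for what SLAM would build), it can sweep every free cell in a lawn-mower pattern instead of wandering at random. The grid and function are invented for illustration and have nothing to do with Dyson's actual software.

    # Toy illustration of systematic ("lawn-mower") coverage over a known map.
    # The grid stands in for the model a SLAM system would build; a real
    # planner would also route around obstacles rather than skipping over them.

    FREE, WALL = ".", "#"

    room = [
        "##########",
        "#........#",
        "#..##....#",
        "#........#",
        "##########",
    ]

    def coverage_path(grid):
        """Visit every free cell row by row, alternating sweep direction."""
        path = []
        for r, row in enumerate(grid):
            cols = range(len(row))
            if r % 2:                    # reverse every other row
                cols = reversed(cols)
            for c in cols:
                if row[c] == FREE:
                    path.append((r, c))
        return path

    if __name__ == "__main__":
        plan = coverage_path(room)
        print(f"{len(plan)} cells to clean, starting at {plan[0]}")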

Link to Original Source

+ - Wire The Programmer To Prevent Buggy Code ->

Submitted by mikejuk
mikejuk (1801200) writes "Here's a new idea.
You might have heard of aids that keep a driver from falling asleep by detecting how alert they are, but what about the same idea applied to programmers? In this case the object isn't to prevent a crash (well, it sort of is) but a bug.
Microsoft researcher Andrew Begel, together with academic and industry colleagues, has been trying to detect when developers are struggling as they work, in order to prevent bugs before they are introduced into code. A paper presented at the 36th International Conference on Software Engineering reports on a study of 15 professional programmers that looked at how well an eye-tracker, an electrodermal activity (EDA) sensor and an electroencephalography (EEG) sensor could be used to predict whether developers would find a task difficult. Difficult tasks are potential bug generators, and finding a task difficult is the programming equivalent of falling asleep at the wheel.
Going beyond this initial investigation, the researchers now need to decide how to support developers who are finding their work difficult. What isn't known yet is how developers will react when their readings approach bug-potential levels and an intervention is deemed necessary; presumably the nature of the intervention also has to be worked out. So next time you sit down at your coding station, consider that in future someone may want to wire you up just to make sure you aren't a source of bugs. And what could the intervention possibly be?"
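As a rough idea of how such predictions might be made, here is a minimal sketch that feeds a few physiological features to a simple classifier and asks for the probability that a task is being found difficult. The feature names, numbers and model are invented placeholders; the study's actual sensors, features and analysis are more involved.

    # Minimal sketch: predict "finding the task difficult" from sensor features.
    # All data here is synthetic; the feature names are placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Each row: [pupil_diameter, fixation_rate, eda_level, eeg_beta_power]
    easy = rng.normal([3.0, 2.5, 0.20, 0.40], 0.1, size=(40, 4))
    hard = rng.normal([3.6, 4.0, 0.50, 0.70], 0.1, size=(40, 4))

    X = np.vstack([easy, hard])
    y = np.array([0] * len(easy) + [1] * len(hard))   # 1 = struggling

    model = LogisticRegression().fit(X, y)

    # A new reading that looks like a developer under load
    sample = np.array([[3.5, 3.9, 0.45, 0.68]])
    print("probability of difficulty:", model.predict_proba(sample)[0, 1])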

Link to Original Source

+ - Researchers Jailbreak iOS 7.1.2 ->

Submitted by mikejuk
mikejuk (1801200) writes "The constant war to jailbreak and patch iOS has taken another step in favor of the jailbreakers. Georgia Tech researchers have found a way to jailbreak the current version of iOS. What the Georgia Tech team has discovered is a way to break in by a multi-step attack. After analysing the patches put in place to stop previous attacks, the team worked out a sequence that would jailbreak any modern iPhone. The team stresses the importance of patching all of the threats, and not just closing one vulnerability and assuming that it renders others unusable as an attack method.
It is claimed that the hack works on any device running iOS 7.1.2, including the iPhone 5s.
It is worth noting that The Device Freedom Prize (https://isios7jailbrokenyet.com/) for an open-source jailbreak of iOS 7 is still unclaimed and stands at just over $30,000.
The details are to be revealed at the forthcoming Black Hat USA (August 6 & 7, Las Vegas) in a session titled Exploiting Unpatched iOS Vulnerabilities for Fun and Profit."

Link to Original Source

+ - Jibo, the friendly helpful robot, nets over $1 million on Indiegogo->

Submitted by mikejuk
mikejuk (1801200) writes "After seven days the Jibo project has over $1.1 million. What is surprising is that Jibo isn't a complex piece of hardware that will do the dishes and pick up clothes. It doesn't move around at all. It just sits and interacts with the family using a camera, microphones and a voice. It is a social robot, the speciality of the founder, MIT's, Cynthia Breazeal. The idea is that this robot will be your friend, take photos, remind you of appointments, order takeaway and tell the kids a story. If you watch the promo video then you can't help but think that this is all too polished and the real thing will fall flat on its face when delivered. If it does work then worry about the hundreds of kids needing psychiatric counselling — shades of Robbie in I, Robot. Even if it is hopelessly hyped — there is a development system and I want one. It is the early days of the home computer all over again."
Link to Original Source

+ - New Raspberry Pi Model B+->

Submitted by mikejuk
mikejuk (1801200) writes "The Raspberry Pi foundation has just announced the Raspberry Pi B+ and the short version is — better and the same price.
With over 2 million sold, an upgrade to the Raspberry Pi is big news. The basic specs haven't changed much: the same BCM2835 and 512MB of RAM, and the same $35 price tag. There are now four USB ports, which means you don't need a hub to work with a mouse, keyboard and WiFi dongle. The GPIO has been expanded to 40 pins, but don't worry: you can plug your old boards and cables into the left-hand part of the connector and it's backward compatible. As well as some additional general-purpose lines, there are two designated for use with an I2C EEPROM. When the Pi boots it will look for a custom EEPROM on these lines and optionally use it to load Linux drivers or set up expansion boards. What this means is that expansion boards can now include identity chips that, when the board is connected, configure the Pi to make use of them — no more manual customization.
The change to a micro SD socket is nice, unless you happen to have lots of spare full-size SD cards around. It is also claimed that the power requirements have dropped by between half a watt and one watt, which brings the Model B+ into the same power-consumption territory as the Model A. This probably still isn't low enough for some applications, and the forums will no doubt be in full flow working out how to reduce the power even further.
There are some other minor changes: composite video is now available on the audio jack and the audio quality has been improved. One big step forward for the Raspberry Pi is that it now has four holes for mounting in standard enclosures — this really lets the Pi go anywhere.
http://www.raspberrypi.org/int..."
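For the curious, here is a rough sketch of what reading such an identity EEPROM could look like from user space. It assumes the EEPROM answers at the conventional 0x50 address on an I2C bus visible to Linux; in practice the dedicated ID pins are probed by the firmware at boot, so the bus number, addressing mode and data layout will differ; treat this purely as an illustration of the idea.

    # Rough sketch of peeking at an expansion-board ID EEPROM over I2C.
    # Assumptions: a small 24Cxx-style EEPROM at address 0x50 on bus 0 with
    # one-byte addressing; the real ID mechanism is handled by the firmware
    # at boot and its details differ.
    from smbus2 import SMBus   # pip install smbus2

    EEPROM_ADDR = 0x50         # conventional EEPROM address (assumption)

    with SMBus(0) as bus:
        # Read the first 16 bytes, where an identity header would live
        header = bus.read_i2c_block_data(EEPROM_ADDR, 0x00, 16)
        print("EEPROM header bytes:", [hex(b) for b in header])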

Link to Original Source

+ - EDSAC Diagrams Rediscovered ->

Submitted by mikejuk
mikejuk (1801200) writes "Due to its importance in the history of computing the UK's Computer Conservation Society embarked on a 4-year project to build a replica of EDSAC. The main challenge facing the team of volunteers who are working on the rebuild is the lack of documentation. There are almost no original design documents remaining so the rebuild volunteers have to scrutinize photographs to puzzle out which bits go where.
However, three years into the project, a set of 19 detailed circuit diagrams has come to light and been handed to the EDSAC team by John Loker, a former engineer in the University of Cambridge Mathematical Laboratory.
"I started work as an engineer in the Maths Lab in 1959 just after EDSAC had been decommissioned. In a corridor there was a lot of stuff piled up ready to be thrown away, but amongst it I spotted a roll of circuit diagrams for EDSAC. I'm a collector, so I couldn't resist the urge to rescue them. "
In the main the documents confirm that the team has been correct in most of its re-engineering assumptions, but the drawings have thrown up a few surprises. The most significant discrepancy the papers reveal between the original and the reconstruction is in the "initial orders" (the boot ROM, in modern terminology). In the absence of fuller information, the reconstruction team had considered and rejected one possibility that was in fact the one used by the original engineers. That will now be rectified in the reconstruction, which is due for completion in late 2015."

Link to Original Source

+ - Udacity Offers Nanodegrees->

Submitted by mikejuk
mikejuk (1801200) writes "Udacity has announced a new credential designed to appeal to employers and those wanting to embark on a high-tech career. The program will launch with nanodegrees for entry-level Front-End Web Developers, Back-End Web Developers, and Mobile iOS Developers.
In his announcement of this new initiative, which continues the career-readiness theme that distinguishes Udacity from other MOOC providers, Sebastian Thrun describes a nanodegree as delivering:
"a new kind of compact, hands-on, and flexible online curriculum. They are designed to help you effectively learn the most in-demand skills, when you need them, so that you can land your dream job."
The cost of a nanodegree is expected to be about $200 per month, and one is expected to take between 6 and 12 months to complete with a time commitment of 10 hours per week. Scholarships are expected to be available for "underrepresented students"."
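Taking those figures at face value, a quick back-of-the-envelope calculation shows the overall commitment they imply; the numbers below simply restate what the announcement says.

    # Back-of-the-envelope totals implied by the announced figures.
    monthly_fee = 200        # dollars per month
    hours_per_week = 10

    for months in (6, 12):
        cost = monthly_fee * months
        hours = hours_per_week * months * 52 / 12   # approximate weeks
        print(f"{months} months: about ${cost} and {hours:.0f} hours of study")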

Link to Original Source

+ - Safari On iOS8 Supports WebGL - At Last!->

Submitted by mikejuk
mikejuk (1801200) writes "The biggest announcement at WWDC has mostly gone unnoticed and uncommented — WebGL support in the Safari browser on both OSX and iOS. Safari is the last browser to give in to the inevitable and offer WebGL — full 3D GPU accelerated graphics in web pages and apps.
Not only is it supported in the browser but in the WebView as well making it possible for web app wrappers such as PhoneGap/Cordova to support WebGL on all platforms.
One possible reason it has taken so long for Apple to recognize that a browser without WebGL is substandard is that it undermines its control of the App Store by allowing web apps that are just as powerful — think 3D games say — to be downloaded and run in the browser. It would be tough for Apple to invent a way to control or profit from freely downloadable web apps. While it might not be the end of the App Store it is a big hole in its walls."

Link to Original Source

+ - Safari On iOS8 Supports WebGL - The New Era Can Now Commence->

Submitted by mikejuk
mikejuk (1801200) writes "The biggest announcement at WWDC has mostly gone unnoticed and uncommented — WebGL support in the Safari browser on OSX and iOS. At last the big browsers all support 3D graphics and web apps and web games in particular are effectively universal.Apple's revolutionary announcement has tended to be overlooked — perhaps because Apple didn't really make a great deal of fuss about it. You might suspect that it isn't that keen for the world to notice that the Safari browser has almost silently joined the growing majority of browsers that support GPU accelerated graphics via WebGL.Not only is it supported in the browser but in WebView as well, which means that native apps that want to show HTML content can now show it including advanced graphics. This also opens up the way for web app wrappers such as PhoneGap/Cordova to support WebGL on all platforms.
One possible reason it has taken Apple so long to accept that a browser without WebGL is substandard is that it controls the App Store with an iron fist and makes a lot of cash in the process. The danger of WebGL is that it allows the creation of web apps that do as much as a native app. The point is that web apps don't need to be installed, and hence they can't be controlled in the way that native apps can.
Is this the end of the App Store?"

Link to Original Source

+ - Crowdfund A Film About Grace Hopper->

Submitted by mikejuk
mikejuk (1801200) writes "Born With Curiosity is a proposed biopic about computer pioneer Grace Hopper http://developers.slashdot.org.... With a week to go before it closes on June 7, a crowdfunding campaign on Indigogo https://www.indiegogo.com/proj... has so far raised 94% of its $45,000 target.
Although there have been a couple of books devoted to Grace Hopper and recently was the subject of a Google Doodle, her story hasn't made it to celluloid, which is something that Melissa Pierce finds anomalous, stating on the Born With Curiosity Indigogo page:
"Steve Jobs had 8 films made about him, with another in pre-production! Without Grace Hopper, Steve might have been a door to door calculator salesman! Even with that fact,there isn't one documentary about Grace and her legacy. It's time to change that.""

Link to Original Source

+ - N.S.A. Collecting Millions of Faces From Web Images->

Submitted by Advocatus Diaboli
Advocatus Diaboli (1627651) writes "The National Security Agency is harvesting huge numbers of images of people from communications that it intercepts through its global surveillance operations for use in sophisticated facial recognition programs, according to top-secret documents. The spy agency’s reliance on facial recognition technology has grown significantly over the last four years as the agency has turned to new software to exploit the flood of images included in emails, text messages, social media, videoconferences and other communications, the N.S.A. documents reveal. Agency officials believe that technological advances could revolutionize the way that the N.S.A. finds intelligence targets around the world, the documents show. The agency’s ambitions for this highly sensitive ability and the scale of its effort have not previously been disclosed."
Link to Original Source

+ - The Flaw Lurking In Every Deep Neural Net ->

Submitted by mikejuk
mikejuk (1801200) writes "A recent paper "Intriguing properties of neural networks" by Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow and Rob Fergus, http://cs.nyu.edu/~zaremba/doc...
a team that includes authors from Google's deep learning research project outlines two pieces of news about the way neural networks behave that run counter to what we believed — and one of them is frankly astonishing.
Every deep neural network has "blind spots" in the sense that there are inputs that are very close to correctly classified examples that are misclassified.
To quote the paper:
"For all the networks we studied, for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network."
To be clear, the adversarial examples look to a human just like the originals, yet the network misclassifies them. You can have two photos that, to a human, look not only like a cat but like the same cat, indeed the same photo, and the machine gets one right and the other wrong.
What is even more shocking is that the adversarial examples seem to have some sort of universality: a large fraction were misclassified by different network architectures trained on the same data, and by networks trained on a different data set.
You might be thinking, "So what if a photo that is clearly of a cat is recognized as a dog?" Change the situation just a little, though: what does it matter if a self-driving car that uses a deep neural network misclassifies a view of a pedestrian standing in front of the car as a clear road?
There is also a philosophical question raised by these blind spots. If a deep neural network is biologically inspired, we can ask whether the same result applies to biological networks.
Put more bluntly: does the human brain have similar built-in errors? If it doesn't, how is it so different from the neural networks that are trying to mimic it? In short, what is the brain's secret that makes it stable and continuous?
Until we find out more, you cannot rely on a neural network in any safety-critical system."
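The paper finds its adversarial examples with an L-BFGS-based optimization; the sketch below conveys the core idea more simply by nudging an image's pixels a tiny amount in the direction that increases the classifier's loss (in the spirit of later "fast gradient" methods, not the authors' exact procedure). The model and image are throwaway placeholders, not the networks studied in the paper.

    # Illustrative sketch: push an image toward misclassification by following
    # the loss gradient with respect to the input pixels. The model and image
    # are placeholders; the paper itself uses an L-BFGS-based search.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
    model.eval()

    image = torch.rand(1, 3, 32, 32)   # placeholder "cat" photo
    label = torch.tensor([3])          # placeholder class index
    epsilon = 0.01                     # tiny perturbation budget

    image.requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()

    # Step the pixels slightly in the direction that increases the loss.
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

    print("original prediction: ", model(image).argmax(dim=1).item())
    print("perturbed prediction:", model(adversarial).argmax(dim=1).item())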

Link to Original Source
