writes "When Google Labs closed there was an outcry. How could an organization just pull the rug from under so many projects?
At least Google announced what it was doing. Mozilla, it seems, since there is no official record, has just quietly tiptoed away, leaving the lights on: the Mozilla Labs website is still accessible. Start to explore it, though, and you notice it is moribund, the last blog post being from December 2013 and the penultimate one from September 2013.
The fact that it is gone is confirmed by recent blog posts and by the redeployment of the people who used to run it. The projects that survived have been moved to their own websites. It isn't clear what has happened to the Hatchery, the incubator that invited new ideas from all and sundry.
One of the big advantages of open source is the ease with which a project can be started. One of the big disadvantages of open source is the ease with which projects can be allowed to die, often without any clear-cut time of death. It seems Mozilla applies this to groups and initiatives as much as to projects. This isn't good."Link to Original Source
writes "Google is becoming as well known for neural networks as for the other kind. The annual ImageNet Large Scale Visual Recognition Challenge, ILSVRC, is a testing ground for all manner of computer vision techniques, but recently it has been dominated by convolutional neural networks, which are trained to recognize objects simply by being shown lots of examples in photographs.
In 2012 there was a big jump in accuracy when a deep convolutional net designed by Alex Krizhevsky, Ilya Sutskever and Geoffrey E. Hinton proved for the first time that neural networks really did work if you had enough data and enough computing power. This is the neural network that Google has used in its photo search algorithm, and, of course, it hired the team to implement it.
This year's competition also brought a jump in performance. Google's GoogLeNet, yes, Goog-le-net, named in honour of the LeNet created by Yann LeCun, won the classification and detection challenge while doubling the quality of last year's results: GoogLeNet scored 44% mean average precision compared to last year's best of 23%.
In simple recognition tasks neural nets are as good as humans, so a more difficult task has now become the focus of attention. Not only do the nets have to recognize a photo of a single object, dog, cat etc., they now have to recognize multiple objects in a photo, a dog with a hat on, say, and localize the objects by drawing bounding boxes. This is much harder, and tens of thousands of CPU cores were used to train GoogLeNet.
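In detection challenges a predicted bounding box typically only counts as correct if it overlaps the ground-truth box strongly enough, usually measured by intersection-over-union (IoU). A minimal sketch of that metric (the boxes and numbers are invented for illustration, not taken from the competition):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width and height of the overlap rectangle; zero means no overlap.
    ow = max(0, min(ax2, bx2) - max(ax1, bx1))
    oh = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ow * oh
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# A detection half-shifted to the right overlaps its target by a third.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```

Scoring many such boxes per class, averaged over classes, is roughly what the "mean average precision" figures quoted above summarize.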
Once nets can recognize individual objects and where they are, they are well on the road to scene analysis and description, a long-time goal of computer vision systems. A robot with GoogLeNet could, with the right higher-level software, see what was around it."Link to Original Source
writes "Yes it's a vacuum cleaner! But you knew it would be. The real question is why it has taken so long to make a sophisticated robot for a job as menial as cleaning the floor. The typical Roomba-style robot vac runs around at random, bumping into things and getting tangled in anything it can find. It is an endearing little machine, and once you have owned one the idea of not having one is unthinkable, but... it is still a little dim, even for the menial job of cleaning the floor.
Enter the Dyson 360 Eye, which was launched last week. This is an upmarket cleaner. Not only does it have a radial root cyclone suction machine, it also has, as its name suggests, 360-degree vision.
A 360 degree panoramic lens lets an infrared sensor see all around. The sensors work in conjunction with a video camera to place objects in the scene. As it moves around it builds a model that is accurate to 5mm. It uses SLAM — Simultaneous Localization And Mapping — which is one mark of an advanced robot. In short — this Dyson knows where it is.
And what is the advantage of this?
Simple — the robot doesn't bump into things and it can clean systematically, which is much more satisfying for a human observer at the very least.
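The map-building half of the SLAM idea described above can be caricatured in a few lines. This is purely illustrative: a real SLAM system estimates the robot's pose and the map jointly and probabilistically, whereas here the pose is simply assumed known, and the grid size and cell marking are invented for the sketch:

```python
GRID = 10  # a toy 10 x 10 occupancy grid of floor cells

def mark_hits(grid, pose, hits):
    """Mark obstacle cells seen from a (known) robot pose.

    pose: the (x, y) cell the robot occupies; hits: obstacle offsets relative to it.
    """
    x, y = pose
    for dx, dy in hits:
        cx, cy = x + dx, y + dy
        if 0 <= cx < GRID and 0 <= cy < GRID:
            grid[cy][cx] = 1  # occupied
    return grid

grid = [[0] * GRID for _ in range(GRID)]
# From the centre, a sensor return two cells to the right and one three cells "up".
mark_hits(grid, (5, 5), [(2, 0), (0, -3)])
```

With a map like this in hand, the robot can plan a systematic sweep of the free cells instead of a random walk, which is exactly the behaviour promised here.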
Add to this radial root cyclone suction, tank tracks to avoid slipping or getting stuck, and an iOS and Android app to control it, and you have a very desirable floor-cleaning robot — but is it overkill? At more than $1000, it will be available early next year and you can pre-order now, even if you only want it to hack. See it in action in the video."Link to Original Source
writes "Here's a new idea.
You might have heard of aids that keep a driver from falling asleep by detecting how alert they are, but what about the same idea applied to programmers? In this case the object isn't to prevent a crash (well, it sort of is) but a bug.
Microsoft researcher Andrew Begel, together with academic and industry colleagues, has been trying to detect when developers are struggling as they work, in order to prevent bugs before they are introduced into code. A paper presented at the 36th International Conference on Software Engineering reports on a study conducted with 15 professional programmers to see how well an eye-tracker, an electrodermal activity (EDA) sensor and an electroencephalography (EEG) sensor could be used to predict whether developers would find a task difficult. Difficult tasks are potential bug generators, and finding a task difficult is the programming equivalent of going to sleep at the wheel.
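You can imagine the prediction step as something as simple as a weighted score over normalized sensor features. The feature names, weights and threshold below are invented for illustration; the study itself trained classifiers on the sensor data rather than using fixed coefficients:

```python
# Hypothetical feature weights, not taken from the paper.
WEIGHTS = {"pupil_dilation": 0.5, "skin_conductance": 0.3, "eeg_load": 0.2}

def task_seems_difficult(features, threshold=0.6):
    """features: sensor readings normalized to the 0..1 range."""
    score = sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return score > threshold

print(task_seems_difficult({"pupil_dilation": 0.9,
                            "skin_conductance": 0.8,
                            "eeg_load": 0.7}))  # True: time for an intervention?
```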
Going beyond this initial investigation, researchers now need to decide how to support developers who are finding their work difficult. What isn't known yet is how developers will react if their actions approach bug-potential levels and an intervention is deemed necessary. Presumably the nature of the intervention also has to be worked out. So next time you sit down at your coding station, consider that in the future they may want to wire you up just to make sure you aren't a source of bugs. And what could possibly be the intervention?"Link to Original Source
writes "The constant war to jailbreak and patch iOS has taken another step in favor of the jailbreakers. Georgia Tech researchers have found a way to break into the current version of iOS via a multi-step attack. After analysing the patches put in place to stop previous attacks, the team worked out a sequence that would jailbreak any modern iPhone. The team stresses the importance of patching all of the threats, not just closing one vulnerability and assuming that it renders others unusable as an attack method.
It is claimed that the hack works on any device running iOS 7.1.2, including the iPhone 5s.
It is worth noting that the Device Freedom Prize (https://isios7jailbrokenyet.com/) for an open source jailbreak of iOS 7 is still unclaimed and stands at just over $30,000.
The details are to be revealed at the forthcoming Black Hat USA (August 6 & 7, Las Vegas) in a session titled Exploiting Unpatched iOS Vulnerabilities for Fun and Profit."Link to Original Source
writes "After seven days the Jibo project has raised over $1.1 million. What is surprising is that Jibo isn't a complex piece of hardware that will do the dishes and pick up clothes. It doesn't move around at all. It just sits and interacts with the family using a camera, microphones and a voice. It is a social robot, the speciality of its founder, MIT's Cynthia Breazeal. The idea is that this robot will be your friend, take photos, remind you of appointments, order takeaway and tell the kids a story. If you watch the promo video you can't help but think that this is all too polished and that the real thing will fall flat on its face when delivered. If it does work, then worry about the hundreds of kids needing psychiatric counselling — shades of Robbie in I, Robot. Even if it is hopelessly hyped, there is a development system and I want one. It is the early days of the home computer all over again."Link to Original Source
writes "The Raspberry Pi Foundation has just announced the Raspberry Pi B+, and the short version is: better, and the same price.
With over 2 million sold, news of an RPi upgrade is big news. The basic specs haven't changed much: the same BCM2835 and 512MB of RAM, and the same $35 price tag. There are now four USB ports, which means you don't need a hub to work with a mouse, keyboard and WiFi dongle. The GPIO has been expanded to 40 pins, but don't worry: you can plug your old boards and cables into the left-hand part of the connector, which is backward compatible. As well as some additional general-purpose lines, there are two designated for use with an I2C EEPROM. When the Pi boots it will look for custom EEPROMs on these lines and optionally use them to load Linux drivers or set up expansion boards. What this means is that expansion boards can now include identity chips that, when the board is connected, configure the Pi to make use of them — no more manual customization.
The change to a micro SD socket is nice, unless you happen to have lots of spare full-size SD cards around. It is also claimed that power requirements have dropped by between half a watt and one watt, which brings the model B into the same power-consumption territory as the model A. This probably still isn't low enough for some applications, and the forums are no doubt going to be in full flow working out how to reduce the power even further.
There are some other minor changes: composite video is now available on the audio jack, and the audio quality has been improved. But one big step for the Raspberry Pi is that it now has four mounting holes for standard enclosures — this really lets the Pi go anywhere.
http://www.raspberrypi.org/int..."Link to Original Source
writes "Because of EDSAC's importance in the history of computing, the UK's Computer Conservation Society embarked on a four-year project to build a replica of it. The main challenge facing the team of volunteers working on the rebuild is the lack of documentation. Almost no original design documents remain, so the rebuild volunteers have to scrutinize photographs to puzzle out which bits go where.
However, three years into the project, a set of 19 detailed circuit diagrams has come to light and been handed to the EDSAC team by John Loker, a former engineer in the University of Cambridge Mathematical Laboratory.
"I started work as an engineer in the Maths Lab in 1959 just after EDSAC had been decommissioned. In a corridor there was a lot of stuff piled up ready to be thrown away, but amongst it I spotted a roll of circuit diagrams for EDSAC. I'm a collector, so I couldn't resist the urge to rescue them."
In the main the documents confirm that the team has been correct in most of its re-engineering assumptions, but the drawings have thrown up a few surprises. The most significant discrepancy between the original and the reconstruction that the papers reveal is in the "initial orders" (the boot ROM, in modern terminology). In the absence of fuller information, the reconstruction team had considered and rejected one possibility, which was in fact the one used by the original engineers. That will now be rectified in the reconstruction, which is due for completion in late 2015."Link to Original Source
writes "Udacity has announced a new credential designed to appeal to employers and those wanting to embark on a high-tech career. The program will launch with nanodegrees for entry-level Front-End Web Developers, Back-End Web Developers, and Mobile iOS Developers.
In his announcement of this new initiative, which continues the career-readiness theme that distinguishes Udacity from other MOOC providers, Sebastian Thrun describes a nanodegree as delivering:
"a new kind of compact, hands-on, and flexible online curriculum. They are designed to help you effectively learn the most in-demand skills, when you need them, so that you can land your dream job."
The cost of a nanodegree is expected to be about $200 per month, and one is expected to take between 6 and 12 months to complete with a time commitment of 10 hours per week. Scholarships are expected to be available for "underrepresented students.""Link to Original Source
writes "The biggest announcement at WWDC has mostly gone unnoticed and uncommented on — WebGL support in the Safari browser on OS X and iOS. At last the big browsers all support 3D graphics, and web apps, and web games in particular, are effectively universal.
Apple's revolutionary announcement has tended to be overlooked, perhaps because Apple didn't really make a great deal of fuss about it. You might suspect that it isn't that keen for the world to notice that Safari has almost silently joined the growing majority of browsers that support GPU-accelerated graphics via WebGL.
Not only is it supported in the browser but in WebView as well, which means that native apps that want to show HTML content can now show it, advanced graphics included. This also opens up the way for web-app wrappers such as PhoneGap/Cordova to support WebGL on all platforms.
One possible reason it has taken so long for Apple to recognize that a browser without WebGL is substandard is that it controls the App Store with an iron fist and makes a lot of cash in the process. The danger of WebGL is that it allows the creation of web apps that do as much as a native app. The point is that web apps don't need to be installed, and hence they can't be controlled in the way that native apps can.
Is this the end of the App Store?"Link to Original Source
writes "Born With Curiosity is a proposed biopic about computer pioneer Grace Hopper http://developers.slashdot.org.... With a week to go before it closes on June 7, a crowdfunding campaign on Indiegogo https://www.indiegogo.com/proj... has so far raised 94% of its $45,000 target.
Although there have been a couple of books devoted to Grace Hopper, and she was recently the subject of a Google Doodle, her story hasn't made it to celluloid, which is something that Melissa Pierce finds anomalous, stating on the Born With Curiosity Indiegogo page:
"Steve Jobs had 8 films made about him, with another in pre-production! Without Grace Hopper, Steve might have been a door to door calculator salesman! Even with that fact, there isn't one documentary about Grace and her legacy. It's time to change that.""Link to Original Source
Advocatus Diaboli (1627651)
writes "The National Security Agency is harvesting huge numbers of images of people from communications that it intercepts through its global surveillance operations for use in sophisticated facial recognition programs, according to top-secret documents. The spy agency’s reliance on facial recognition technology has grown significantly over the last four years as the agency has turned to new software to exploit the flood of images included in emails, text messages, social media, videoconferences and other communications, the N.S.A. documents reveal. Agency officials believe that technological advances could revolutionize the way that the N.S.A. finds intelligence targets around the world, the documents show. The agency’s ambitions for this highly sensitive ability and the scale of its effort have not previously been disclosed."Link to Original Source
writes "A recent paper, "Intriguing properties of neural networks" by Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow and Rob Fergus, http://cs.nyu.edu/~zaremba/doc... (a team that includes authors from Google's deep learning research project), outlines two pieces of news about the way neural networks behave that run counter to what we believed — and one of them is frankly astonishing.
Every deep neural network has "blind spots", in the sense that there are inputs very close to correctly classified examples that are nonetheless misclassified.
To quote the paper:
"For all the networks we studied, for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network."
To be clear, the adversarial examples looked to a human like the originals, but the network misclassified them. You can have two photos that look to a human not only like a cat, but like the same cat, indeed the same photo, yet the machine gets one right and the other wrong.
What is even more shocking is that the adversarial examples seem to have some sort of universality: a large fraction were misclassified by different network architectures trained on the same data, and by networks trained on a different data set.
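The effect is easy to reproduce on a toy model. The paper generates its adversarial examples by optimization against deep networks; the sketch below uses a plain linear classifier with made-up weights, where a tiny step against the sign of each weight flips the decision while barely changing the input:

```python
w = [0.5, -0.5, 0.5, -0.5]   # made-up classifier weights: positive score = "cat"
x = [1.0, 0.8, 0.9, 0.7]     # an input correctly scored as "cat"

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

eps = 0.15                   # maximum change allowed in any single component
# Push each component slightly against the sign of its weight.
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(score(x))       # 0.2  -> "cat"
print(score(x_adv))   # -0.1 -> misclassified, though no component moved more than 0.15
```

A deep network's decision boundary is far more complicated than a linear one, which is what makes the existence of such visually indistinguishable examples for deep nets so surprising.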
You might be thinking, "so what if a cat photo that is clearly a photo of a cat is recognized as a dog?" Change the situation just a little: what does it matter if a self-driving car that uses a deep neural network misclassifies a view of a pedestrian standing in front of the car as a clear road?
There is also the philosophical question raised by these blind spots. If a deep neural network is biologically inspired, we can ask: does the same result apply to biological networks?
Put more bluntly "does the human brain have similar built-in errors?" If it doesn't, how is it so different from the neural networks that are trying to mimic it? In short, what is the brain's secret that makes it stable and continuous?
Until we find out more, you cannot rely on a neural network in any safety-critical system."Link to Original Source
writes "The nematode worm C. elegans is going where no worm has gone before — into cyberspace. The OpenWorm project aims to build a complete and accurate simulation of the worm, working at the level of chemistry, making it the first animal to be re-created as software. The most important thing about C. elegans is that it has only about 1,000 cells, of which just 302 are neurons. The project has been going for a while, but it recently made a successful pitch on Kickstarter for $120,000 to develop the simulation to the point where the neurons control the body of the worm. The rewards offered on Kickstarter might strike some as bizarre: T-shirts featuring C. elegans and access to an online version of the simulation called WormSim.
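To give a flavour of what "neurons as code" means, here is a toy leaky integrate-and-fire neuron simulated in discrete time steps. OpenWorm's models, working at the level of chemistry, are vastly richer than this; the leak and threshold values are arbitrary:

```python
def simulate(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the toy neuron fires.

    Each step the membrane potential decays by `leak`, adds the input
    current, and resets to zero once it crosses `threshold`.
    """
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = v * leak + current
        if v >= threshold:
            spikes.append(t)
            v = 0.0  # reset after a spike
    return spikes

print(simulate([0.5, 0.5, 0.5, 0.0, 0.9, 0.9]))  # [2, 5]
```

Now imagine 302 of these, connected as in the worm's real wiring diagram and driving a body model, and you have a crude picture of what the Kickstarter money is meant to deliver.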
The "why" is that it's the only way to find out whether we understand C. elegans, but it raises lots of philosophical questions — the biggest being whether the finished simulation is alive."Link to Original Source