mikejuk writes: It all starts with a simple thought: "What if you could throw a paper plane from one screen to another?" From this idea we have Paper Planes, an Android experiment which lets you "throw" a paper plane using your phone to launch it. The plane zooms off the phone screen and appears on the screen of a desktop viewing a website, where you can watch your plane join all the planes thrown by other users. You can catch a plane and see where it came from by viewing the passport included when it was built, and you can add your own stamp to show where it has been. It is fun and it is a new sort of social interaction mediated by computer technology. Watch the world throw paper airplanes and discover geography, or at least the geography of the most technologically advanced parts of the world. Perhaps the most important idea here is the use of all the screens as one big display. Why not a paper airplane that flies across all the screens in a house or school or wherever? Not only can you get the app free from Google Play, you can also get the code from GitHub.
mikejuk writes: The control of what software users can run on their machines is becoming ever tighter. Now Microsoft has announced that only signed drivers will work in the next release of Windows 10. Before you start to panic about backward compatibility with existing drivers, the lockdown is only going to be enforced on new installations of Windows 10. If you simply upgrade an existing system, the OS will carry over the drivers that are already installed. Only new installations, i.e. installing all drivers from scratch, will enforce the new rules from Windows 10 version 1607. Be warned: if you need to do a fresh install of Windows 10 in the future, you might find that your existing drivers are rejected. There's an xkcd for that: https://xkcd.com/1144/
mikejuk writes: Following up a recent Slashdot item on Coursera trashing historically important classic courses, all because it is moving to a new platform with better money-making prospects: the good news is that Coursera has "done the right thing", although the situation is still a bit confused and, with no announcement, it could well be missed. Coursera has a list of 90 courses that have transitioned to the new platform since the old one shut down on June 30th, and it includes 25 Computer Science courses and the all-important Hinton course on neural networks. Most of the courses are free, but there are no certificates of completion or anything else. While they have specified start dates, and cohorts of students will be encouraged to complete them within a set number of weeks, without graded assignments there may not be the same impetus as for the original courses or for newer courses designed specifically for the new platform.
mikejuk writes: There is a flaw in the Turing test. An AI agent that pleads the Fifth can, by remaining silent, convince a judge that it is human and hence pass the test... If you are not rolling on the floor laughing, then you are being taken in by a specious argument masquerading as something of academic importance. I kid you not: mainstream academic institutions have been taken in by this argument, the ACM for example, and a paper has been published in a supposedly academic journal: "Taking the fifth amendment in Turing’s imitation game", Journal of Experimental & Theoretical Artificial Intelligence. But in the words of John McEnroe: "you CANNOT be serious." Does just remaining silent cause a problem for the Turing test? Dr Kevin Warwick thinks so: “However, if an entity can pass the test by remaining silent, this cannot be seen as an indication it is a thinking entity, otherwise objects such as stones or rocks, which clearly do not think, could pass the test. Therefore, we must conclude that ‘taking the Fifth’ fleshes out a serious flaw in the Turing test.” This is, of course, nonsense. If you refuse to sit an exam you don't get to pass it, and refusal to sit an exam doesn't invalidate the exam. We all know what the Turing test is supposed to be: an operational definition of intelligence, and as such the field of discourse has to be wide and not restricted to any single domain, not even the domain of silence. This is the sort of thing that makes the common man, who knows intelligence when he sees it, bemoan the fact that these guys get to live in an ivory tower and waste the resources provided for them.
mikejuk writes: Coursera has announced that 30 June is the date when it will shut down the servers hosting the courses that were the first, free, offerings on its platform. The new model isn't just a revised interface; it is also a new monetization model, and presumably the decision to throw out all the original free content by shutting the platform is motivated by greedy commercialism. You could say that the golden age of the MOOC is over, with the early enthusiastic pioneers, who did it because they were passionate about their subject and about teaching it, being replaced by a bunch of "let's teach a course because it's good for my career and ego" types, with subjects selected by what will sell. Closing down the old platform is unnecessary destruction of irreplaceable content, and Coursera needs to rethink a policy that goes against everything it originally stood for. The courses affected are from the early days of the MOOC and are likely to be important in the history of their subjects. The most relevant for us, but far from the only one, is Geoffrey Hinton's Neural Networks for Machine Learning, which gave a "deep" insight into the way he thinks and how neural networks work. Making it unavailable is an act of needless cultural and academic vandalism. Hinton is one of the founding fathers of neural networks and deep neural networks. Surely this is a historic document that cannot simply be erased or consigned to some inaccessible digital black hole. Something has to be done to preserve this important record; they don't have to turn off the servers just because they have a new platform.
mikejuk writes: To celebrate 25 years of VB there is a new request on User Voice: "On the 25th anniversary of Classic VB, Return It To Its Programmers". Microsoft has been asked a number of times to open source VB6, and the request has been repeatedly rejected without any real reason being given. However, to remove a language from its community without an exceptional reason is an act of vandalism. The new Microsoft claims to back open source, so why not in this case? There is no need for Microsoft to do any more work on the code base — simply open source it and allow the community to keep it alive. Don't deride the attempt to make Classic VB open source just because you are happy with .NET. There is no doubt that C# and VB.NET are sophisticated, well-designed languages, and perhaps you, like me, have no desire to return to VB6 or anything like it. Vote for the proposal not because you want to use VB6 or think it is worth having; vote for it because a company like Microsoft should not take a language away from its users.
mikejuk writes: Recasting movies as if they had been painted by Kandinsky or Picasso is now possible. Deep neural networks have been entertaining us with their ability to create dreamlike art and even turn any photo into an artwork in the style of a well-known artist. Now a team from the University of Freiburg http://arxiv.org/abs/1604.0861... has applied the basic style transfer technique to video. So now you can really see Star Wars in the style of Kandinsky or any other artist you care to name. This is fun, but is it useful?
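The "style" half of the technique is easy to sketch: in the standard style transfer method, a painting's style is summarized by the correlations between feature channels of a convolutional network, collected into a Gram matrix. The snippet below is a rough illustration of that one ingredient; the `features` array is a stand-in for real network activations, not anything from the Freiburg code:

```python
import numpy as np

def gram_matrix(features):
    """Summarize 'style' as channel-by-channel correlations.

    features: array of shape (channels, height, width), standing in
    for the activations of one convolutional layer.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial positions
    return f @ f.T / (h * w)         # correlate channels, averaged over positions
```

In style transfer, the output image is optimized so that its Gram matrices match the painting's while its raw features still match the content image; the video work adds, roughly speaking, a temporal penalty so that consecutive stylized frames stay consistent instead of flickering.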
mikejuk writes: Computational photography is amazing, but sometimes you have to wonder if it is actually useful and not just amusing. Proving that it is, researchers have found a way to extract high-resolution images from multiple low-resolution images of the Martian surface. These are almost good enough to see the lost Beagle 2 lander clearly. The technique was applied to photos from the HiRISE camera on board the Mars Reconnaissance Orbiter, 300km above the red planet's surface. These low-resolution images provide a view of objects as small as 25cm, but by combining eight repeat passes over Gusev Crater, where the Spirit rover left tracks, the resolution could be increased to 5cm. The processing time was on the order of 24 hours for a 2048x1024 tile; because of the time it takes, a full HiRISE image hasn't been processed yet. This should become possible when the program is extended to make use of a GPU. The method was applied to the proposed crash site of the Beagle 2 lander. In case you have forgotten, Beagle 2 was a novel lander designed to test for life which should have transmitted a signal on Christmas Day 2003, but was never heard from. A possible crash site was spotted twelve years later as a bright dot in a HiRISE image. The reconstructed higher-resolution version starts to show the characteristic shape of the spacecraft.
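The core idea of multi-frame super-resolution can be sketched in a few lines: each repeat pass samples the scene at a slightly different sub-pixel offset, so pixels from several passes can be placed onto a finer grid and averaged. The real pipeline (sub-pixel registration, deconvolution and so on) is far more sophisticated; this naive "shift and add" is a hypothetical sketch, not the researchers' actual code:

```python
import numpy as np

def shift_and_add(low_res_frames, shifts, scale):
    """Naive multi-frame super-resolution.

    Each low-res frame's pixels are dropped onto a grid that is
    `scale` times finer, offset by that frame's known sub-pixel
    shift (in fine-grid units), then the overlaps are averaged.
    """
    h, w = low_res_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(low_res_frames, shifts):
        ys = np.arange(h) * scale + dy   # fine-grid rows this frame samples
        xs = np.arange(w) * scale + dx   # fine-grid columns
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1                    # avoid dividing empty cells by zero
    return acc / cnt
```

With eight passes at different offsets, most cells of the fine grid receive at least one real sample, which is roughly why eight 25cm views can support a 5cm reconstruction.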
mikejuk writes: NVIDIA has moved into AI in a big way, both with hardware and software. Now it has implemented an end-to-end neural network approach to driving a car. This is a much bigger breakthrough than winning at Go and raises fundamental questions about what sort of systems we are willing to accept driving cars for us. NVIDIA is reporting the results of its end-to-end self-driving car project, called Dave-2. The raw input is simply video of the view of the road and the output is the steering wheel angle; the neural network in between learns to steer by being shown videos of a human driving and what the human driver did to the steering wheel as a result. You could say that the network learned to drive by sitting next to a human driver. This is very different from the engineered approach used by, say, Google, where Lidar and highly accurate maps are used to implement if..then rules that formulate how to drive in a precise way. After testing in simulation, Dave-2 was taken out on the road — the real road. Performance wasn't perfect, but the system did drive the car 98% of the time, leaving the human just 2% of the driving to do. The real issue is not that a neural network is better at driving than the engineered solutions offered by Google, but that we really don't know how it does it. A neural network can generalize to situations it has never seen before, something the current crop of self-driving if..then rules cannot. However, we can't reduce the network to a set of clear if..then rules that explain the way it behaves. It might not have a specific "bus detector", but this doesn't mean it will crash into a bus as Google's self-driving car did. Do we need to understand a system to have confidence that it will work? If we learn the lessons of traditional buggy software, the answer seems to be no.
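In outline the training setup is plain supervised learning, often called behavioural cloning: frames paired with the human's steering angle, and a model fitted to predict angle from frame. Dave-2 uses a convolutional network; the toy sketch below substitutes a linear model trained by gradient descent just to show the shape of that loop. All names and data here are made up, not NVIDIA's code:

```python
import numpy as np

def train_steering(frames, angles, lr=0.1, epochs=2000):
    """Fit a linear map from (flattened) frames to steering angle.

    frames: array (n_samples, n_features), stand-in for camera images.
    angles: array (n_samples,), the human driver's steering angles.
    Returns learned weights and bias minimizing squared error.
    """
    w = np.zeros(frames.shape[1])
    b = 0.0
    for _ in range(epochs):
        pred = frames @ w + b            # model's steering guess per frame
        err = pred - angles              # how far from the human's action
        w -= lr * frames.T @ err / len(angles)   # gradient step on weights
        b -= lr * err.mean()                     # gradient step on bias
    return w, b
```

The point of the sketch is that nothing in the loop encodes driving rules: the model is shaped entirely by the recorded human behaviour, which is exactly why the resulting system is hard to reduce to explicit if..then explanations.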
mikejuk writes: Speculating on what might have happened if Babbage had built his Analytical Engine is fun, but did you ever think that a Victorian computer could implement a neural network and learn to read handwritten digits? Well it can, and it's not a joke. So how can you implement a deep neural network on a machine that doesn't exist? The answer is that there is an Analytical Engine simulator that runs the instruction set of the original machine. However, a neural network is a big computation and the Engine has only 20 Kbytes of memory and a small instruction set. This didn't stop Adam P. Goucher from doing it. The 412,663 lines of code needed for the program would correspond to a stack of Jacquard-loom-style cards as tall as the Burj Khalifa in Dubai. After training on 20,000 handwritten digit images, the network achieved 96.31% accuracy when tested on 10,000 new images. So could Babbage have made AI possible in Victorian times? Not really. The time it would have taken to process the 412,663 cards is possibly several centuries.
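For a sense of scale, the computation being squeezed onto the Engine is mostly repeated multiply-accumulate: each layer of a feedforward network is a matrix-vector product followed by a simple nonlinearity. A minimal sketch of one forward pass is below; the weights are placeholders, not Goucher's trained network, and his actual layer sizes and activation functions may differ:

```python
import numpy as np

def mlp_predict(x, W1, b1, W2, b2):
    """One-hidden-layer forward pass: the same multiply-accumulate
    work, written for a modern machine rather than punched cards."""
    h = np.maximum(W1 @ x + b1, 0.0)   # hidden layer with ReLU
    logits = W2 @ h + b2               # one score per digit class
    return int(np.argmax(logits))      # predicted digit, 0-9
```

On the Engine every one of those multiplications is a separate card-driven operation, which gives a sense of why the program runs to hundreds of thousands of lines and why executing it mechanically would take so absurdly long.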
mikejuk writes: Google's AlphaGo has won the DeepMind Challenge by winning the third game in a row of the five-game match against 18-time world champion Lee Se-dol. AlphaGo is now the number three Go player in the world, and this is an event that will be remembered for a long time. Most AI experts thought that it would take decades to achieve, but now we know that we have been on the right track since the 1980s or earlier. AlphaGo makes use of nothing dramatically new — it learned to play Go using a deep neural network and reinforcement learning, both developments of classical AI techniques. We now know that we don't need any big new breakthroughs to get to true AI. The results of the final two games are going to be interesting, but as far as AI is concerned the match really is all over.
mikejuk writes: Mozilla has been clarifying some of its plans to convert the Firefox OS project into four IoT-based projects. At a casual glance this seems like a naive move that is doomed to failure. We have Project Link, a Siri/Cortana/Alexa clone; Project Sensor Web, a distributed data-gathering network; Project Smart Home; and Project Vaani, a voice interface for IoT. With Firefox losing market share and projects like Firefox OS, Thunderbird, Shumway and Persona closing down, perhaps Mozilla should try to find its way back to its core concerns. All four of the projects need significant AI expertise and powerful cloud computing resources, neither of which Mozilla is likely to be able to afford.
mikejuk writes: Details of the next member of the successful Raspberry Pi family have become available as part of FCC testing documents. The Pi 3 finally includes WiFi and Bluetooth/LE. Comparing the board with the Pi 2, it is clear that most of the electronics have stayed the same. A Raspberry Pi with built-in WiFi and Bluetooth puts it directly in competition with the new Linux-based Arduinos, Intel's Edison and its derivatives, and with the ESP8266, a very low cost (about $2) but not well-known WiFi board. And of course, it will be in competition with its own stablemates. If the Pi 3 is only a few dollars more than the Pi 2, then it will be the obvious first choice. This would effectively make the Pi Zero, at $5 with no networking, king of the low end and the Pi 3 the choice at the other end of the spectrum. Let's hope they make more than one or two before the launch, because the $5 Pi Zero is still out of stock most places three months after being announced and it is annoying a lot of potential users.
mikejuk writes: In its recent earnings call, Yahoo revealed plans to cut its workforce by 15%, around 1,600 employees, by the end of the year. Yahoo Labs is another victim of the cuts, as revealed in a Tumblr post by Yoelle Maarek, who reports that both Yahoo’s Chief Scientist, Ron Brachman, and VP of Research Ricardo Baeza-Yates will be leaving the company and that going forward: "Our new approach is to integrate research teams directly into our product teams in order to produce innovation that will drive excellence in those product areas. We will also have an independent research team that will work autonomously or in partnership with product partners. The integrated and independent teams, as a whole, will be known as Yahoo Research." Maarek, formerly VP of Research, now becomes leader of Yahoo Research. To anyone who has followed the story of research at Yahoo there will be a sense of deja vu. Back in 2012 Yahoo laid off many of its research team, many of whom found a new home with Microsoft. It was Marissa Mayer who, in the following year, recruited a substantial number of PhDs to Yahoo Labs, which initiated some interesting projects. Mayer clearly thought that research might save Yahoo!, but now it all seems a bit late and Yahoo! can't save its research lab.
mikejuk writes: No matter how you spin it, the Pi Zero is remarkably good value for a one-off or a repeat-production IoT project. It also has one big advantage over similarly priced alternatives: a community and a track record. There are so many Pis out there that it has a stability that any IoT developer will find reassuring. Thus when the Pi Zero was announced at $5 it was a knockout blow for many of its competitors. Suddenly other previously attractive devices simply looked less interesting. The $9 C.H.I.P, the $20 CodeBug and even the free BBC MicroBit lost some of their shine and potential users. But the Pi Zero sold out. The Pi Zero was supposed to be available from November 26, 2015. It is now the start of February and all of the stockists, including the Pi Swag Shop, are still showing out of stock. That's two whole months, and counting, of restricted supply, which is more than an initial hiccup. Of course you would expect enough to be made available initially to meet the expected demand. The Pi sells something in the region of 200,000 per month, so what do you think the initial run of the Pi Zero actually was? The answer is 20,000 units, of which 10,000 were stuck to the cover of MagPi and "given away", leaving just 10,000 in the usual distribution channels. And yet Eben Upton, founder of the Raspberry Pi Foundation, commented: "You'd think we'd be used to it by now, but we're always amazed by the level of interest in new Raspberry Pi products." Well yes, you really would think that they might be used to it by now and perhaps even be prepared for it. At the time of writing the Pi Zero is still out of stock, and when it is briefly in stock customers are limited to one unit. A victim of its own success, yes, but the real victims are the Raspberry Pi's competitors.