
Comment Mixture of approaches works best (Score 2) 75

Usually, you need a mixture of approaches to get things to work. Idealism in software engineering, as in engineering generally, works about as well as idealism in politics, i.e., it doesn't really work: it misses key points. But, in both areas, it's much easier to build a platform on idealism, and so people who propose a single idealistic viewpoint often do quite well.

In practice, in software engineering, saying 'all tests must be automated, 100%' misses that some things are really hard to test automatically but can be tested by hand quite simply. The same goes for creating test harnesses and mocking, which this article is the hardware-engineering equivalent of. Sometimes it's easier to mock and do real 'unit testing', and sometimes it isn't, and insisting that every project, and every part of every project, follow the exact same uniform standard might not always work as well as it looks like it will in the PowerPoint presentation :-P

Submission + - China builds 57-story skyscraper in 19 days

hughperkins writes: "This is a video timelapse (skip to 1:55) of a 57-story skyscraper being erected in China in only 19 days. The 2-million-square-foot (180,000-square-meter) building was constructed using prefabricated modules built at a factory and assembled on-site."

Comment Re:so breakthrough (Score 4, Interesting) 142

They're using a standard technique. Convolutional networks started to become big with LeCun's 1998 paper on learning to recognize hand-written digits. His LeNet-5 network could identify the digit accurately 99% of the time.

Convolutional networks are starting to be used to play Go, e.g. 'Move Evaluation in Go Using Deep Convolutional Neural Networks', by Maddison, Huang, Sutskever and Silver. Maddison et al. used a 12-layer convolutional network to predict where an expert would move next with 50% accuracy :-)

Progress on convolutional networks moves forward all the time, in an incremental way. If we had one article per day about each increment, it would quickly lose mass appeal though :-) This article is about one increment along the way, but it does symbolize the massive progress being made.

Convolutional networks work well partly because they can take advantage of the massive computational capacity made available by GPU hardware.
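The core operation behind all of this is small: slide a filter over the image and take a dot product at each position. Here's a minimal sketch in numpy (the toy image and edge-detecting kernel are my own illustration, not anything from LeCun's paper — and note that deep-learning libraries actually compute cross-correlation but call it 'convolution'):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid-mode 2D cross-correlation (what deep-learning
    frameworks call 'convolution'): slide the kernel over the
    image and take a dot product at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Toy image: left half dark (0), right half bright (1).
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# A tiny vertical-edge detector: responds to left-to-right increases.
kernel = np.array([[-1.0, 1.0]])
edges = conv2d_valid(image, kernel)
```

The output lights up only at the dark-to-bright boundary, which is exactly the kind of low-level feature the early layers of a convnet learn by themselves.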

Comment Re:Spike boots (Score 3, Informative) 142

Yes, check this out: 'High Confidence Predictions for Unrecognizable Images', by Nguyen, Yosinski and Clune. It's a paper that shows an image that the net is 99.99% sure is an electric guitar, but which looks nothing like one :-)

For the technically minded: the paper's authors propose that the reason is that the network uses a discriminative model rather than a generative model. That means the network learns a mathematical boundary that separates the images it sees, in some kind of high-dimensional transformed space. It doesn't learn how to generate new such images, i.e., you can't ask it to 'draw me an electric guitar' :-) Maybe in a few years :-)

The authors don't compare the network with the human brain much though. I.e., are they saying that the human brain uses a generative model? Is that why the human brain doesn't see a white-noise picture and claim it's a horse?

Comment Re:Even more work for spies! (Score 1) 99

Note that encfs is perfect for this:
- encrypts using AES-256
- easy to use
- works on linux :-)
- and there's at least one app for Android that is compatible with the encryption protocol
- each file is still stored as a single encrypted file, so:
      -- no issues with losing all your data at once :-)
      -- replication can still be file by file
- works through Fuse, doesn't need admin rights, kernel drivers and stuff :-)

Comment Re:if you can access it on a website (Score 2) 107

You can use a single password, combined with the url of the website, to generate unique passwords for each website, via a hashing algorithm.

There are several implementations of this around, some derivatives of others.

The advantage of this system is:
- only one password to remember
- if a website gets hacked, that site's password can't be used on other websites, and can't realistically be used to recover your master password (assuming the attackers even know which algorithm you're using, which is unlikely)
- unlike a password safe, you don't need to handle making backups, replicating the backups around, and so on
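The idea fits in a few lines. Here's a sketch using Python's standard library — PBKDF2 is my illustrative choice of hash here, not necessarily what any particular bookmarklet uses, and the site names are made up:

```python
import base64
import hashlib

def site_password(master, site, length=16):
    """Derive a per-site password from one master password plus the
    site's domain. The domain acts as the salt, so each site gets a
    different, deterministic password."""
    raw = hashlib.pbkdf2_hmac('sha256', master.encode(), site.encode(), 100_000)
    # Base64-encode so the result is typeable, then truncate.
    return base64.b64encode(raw).decode()[:length]

pw1 = site_password('correct horse battery staple', 'example.com')
pw2 = site_password('correct horse battery staple', 'example.org')
```

Same master password, different sites, different derived passwords — and re-running it always reproduces the same password, which is why there's nothing to back up.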

Comment Re:War! (Score 1) 259

To be fair, 'we're a threat' includes 'we could become a threat in the future'. Why wait for us to become strong enough to be troublesome to mop up, when they could mop us up now?

It's a bit like keeping the fridge clean. You don't wait until it grows monsters that will actually attack you. You simply clean the surfaces occasionally, get rid of any traces of mold and stuff.

Comment Re:Geoffrey Hinton (Score 1) 209

There's also a great tutorial by Andrew Ng's group.

There are currently two main types of deep learning, by the way:
- restricted Boltzmann machines (RBM)
- sparse auto-encoders

Google / Andrew Ng use sparse auto-encoders. Hinton created (and uses) deep RBM networks. They both work in a similar way: each layer learns to reconstruct its input using a low-dimensional representation. In this way, lower layers build up, for example, line detectors, and higher layers build up more abstract representations.
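The 'each layer learns to reconstruct its input through a bottleneck' idea can be sketched with a plain linear autoencoder in numpy. This is a toy illustration of the principle only — no sparsity penalty, no RBM machinery, and the data and sizes are my own invention:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 4-D that actually lie on a 2-D subspace,
# so a 2-unit bottleneck can reconstruct them well.
basis = rng.normal(size=(2, 4))
X = rng.normal(size=(200, 2)) @ basis

# Linear autoencoder: encode 4-D input down to 2 dims, decode back to 4.
W_enc = rng.normal(size=(4, 2)) * 0.1
W_dec = rng.normal(size=(2, 4)) * 0.1

def loss(X, W_enc, W_dec):
    recon = X @ W_enc @ W_dec
    return float(np.mean((recon - X) ** 2))

lr = 0.01
initial = loss(X, W_enc, W_dec)
for _ in range(500):
    H = X @ W_enc                      # low-dimensional codes
    R = H @ W_dec                      # reconstruction of the input
    G = 2.0 * (R - X) / X.size         # dLoss/dReconstruction
    grad_dec = H.T @ G                 # dLoss/dW_dec
    grad_enc = X.T @ (G @ W_dec.T)     # dLoss/dW_enc
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
final = loss(X, W_enc, W_dec)
```

After training, the reconstruction error has dropped: the 2-D codes in H are the 'low-dimensional representation', and stacking layers of this is what builds up the line detectors and more abstract features.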

Comment Re:Its not winning the Hutter Prize (Score 1) 209

From the task description:

"Restrictions: Must run in 10 hours on a 2GHz P4 with 1GB RAM and 10GB free HD"

So, even if you could write an algorithm that fits in a couple of meg, and magically generates awesome feature extraction capabilities, which is kind of what deep learning can do, you'd be excluded from using it in the Hutter prize competition.

For comparison, the Google / Andrew Ng experiment where they got a computer to learn to recognize cats all by itself used a cluster of 16,000 cores (1,000 nodes × 16 cores) for 3 days. That's over a million core-hours, and far exceeds the limitations of the Hutter prize competition.

Comment Re:no (Score 1) 250

Check out Nic's password generator.

I extended it a bit:
- generates longer passwords
- the bookmarklet's password field masks its input with password characters
- there's an option to use a bookmarklet with a 'confirm' field
- added a console application (Python) that invisibly copies the password to the clipboard, for non-web applications
