They're using a standard technique. Convolutional networks started to become big with LeCun's 1998 paper on learning to recognize hand-written digits http://yann.lecun.com/exdb/pub... . His LeNet-5 network could identify the digit accurately 99% of the time.
Convolutional networks are also starting to be used to play Go, e.g. 'Move Evaluation in Go Using Deep Convolutional Neural Networks', by Maddison, Huang, Sutskever and Silver, http://arxiv.org/pdf/1412.6564... Maddison et al. used a 12-layer convolutional network to predict where an expert would move next, with 50% accuracy.
Progress on convolutional networks moves forward all the time, in an incremental way. If we had one article per day about each increment, though, it would quickly lose mass appeal.
Convolutional networks work well partly because they can take advantage of the massive computational capacity of modern GPU hardware.
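The core operation is simple enough to sketch in a few lines. Here is a minimal 'valid' 2D convolution in plain Python, a toy illustration of the building block only (not LeNet-5 itself; real networks use many learned kernels and GPU implementations for speed):

```python
# Minimal 'valid' 2D convolution (strictly, cross-correlation, as in most
# deep-learning libraries). Toy sketch of the convnet building block.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += image[i + di][j + dj] * kernel[di][dj]
            out[i][j] = s
    return out

# A vertical-edge kernel responds strongly where the image goes dark-to-light:
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
print(conv2d(image, kernel))
# each output row is [0.0, 2.0, 0.0]: strongest response right at the edge
```

Lower layers of a convnet learn kernels like this edge detector from data; higher layers combine their outputs into progressively more abstract features.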
Yes, check out 'High Confidence Predictions for Unrecognizable Images', by Nguyen, Yosinski and Clune, http://arxiv.org/abs/1412.1897 . It's a paper that shows an image that the net is 99.99% sure is an electric guitar, but that looks nothing like one.
For the technically minded, the paper's authors propose that the reason is that the network is using a discriminative model, rather than a generative model. That means that the network learns a mathematical boundary that separates the images that it sees, in some kind of high-dimensional transformed space. It doesn't learn how to generate such images, i.e. you can't ask it to 'draw me an electric guitar'.
The authors don't compare the network with the human brain in much depth, though. Are they saying that the human brain uses a generative model? Is that why the human brain doesn't look at a white-noise picture and claim it's a horse?
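A toy sketch of why a discriminative model can be so confidently wrong (the weights below are invented for illustration, not taken from the paper): the model only learns a boundary, so a point far from all training data can still land deep on one side of it and receive a near-certain score.

```python
import math

# Suppose a discriminative classifier has learned a linear boundary
# w.x + b = 0 separating 'guitar' from 'not guitar' in some feature space.
# These weights are made up purely for illustration.
w = [2.0, -1.5]
b = 0.3

def confidence(x):
    """Sigmoid of the signed distance-like score: P('guitar')."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# A noise-like point that resembles no training image, but happens to lie
# far on the 'guitar' side of the boundary, gets near-certain confidence:
noise = [7.3, -8.1]
print(confidence(noise))  # very close to 1.0
```

A generative model would instead score how plausible the input itself is, and could flag the noise point as unlike anything it knows how to generate.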
Note that encfs is perfect for this:
- encrypts using AES-256
- easy to use
- works on Linux
- there's at least one Android app that is compatible with the encryption format
- each file is still stored as a single encrypted file, so:
-- no risk of losing all your data at once
-- replication can still be done file by file
- works through FUSE, so it doesn't need admin rights, kernel drivers and so on
You can use a single master password, combined with the URL of the website, to generate a unique password for each website, via a hashing algorithm.
The advantages of this system are:
- only one password to remember
- if a website gets hacked, the leaked password can't be used on other websites, and can't realistically be used to recover your master password (assuming the attacker even knows which algorithm you're using, which is unlikely)
- unlike a password safe, you don't need to handle making backups, replicating them around, and so on
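A minimal sketch of the scheme; the hash choice, separator, and truncation length here are my assumptions, not a reference to any particular tool:

```python
import hashlib
import base64

def site_password(master, url, length=16):
    """Derive a per-site password from a master password plus the site URL.

    Toy sketch: hash the pair, then base64-encode the digest so the result
    is typeable, truncated to the desired length.
    """
    digest = hashlib.sha256((master + ":" + url).encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).decode("ascii")[:length]

# Same inputs always give the same password; different sites give different ones.
print(site_password("correct horse", "https://example.com"))
print(site_password("correct horse", "https://other.example"))
```

A hardened version would use a deliberately slow key-derivation function such as PBKDF2 or scrypt instead of a single SHA-256, to make brute-forcing the master password from a leaked site password much more expensive.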
To be fair, 'we're a threat' includes 'we could become a threat in the future'. Why wait for us to become strong enough to be troublesome to mop up, when they could mop us up now?
It's a bit like keeping the fridge clean. You don't wait until it grows monsters that will actually attack you. You simply clean the surfaces occasionally, get rid of any traces of mold and stuff.
"Asking whether a computer can be intelligent is like asking whether a submarine can swim".
An airplane doesn't flap its wings, but flies faster than birds can.
Submarines don't swim, but they move through the water faster than dolphins.
Not everything has to copy nature exactly in order to be effective.
There's also a great tutorial by Andrew Ng's group at:
There are two main types of deep learning currently, by the way:
- restricted Boltzmann machines (RBMs)
- sparse auto-encoders
Google / Andrew Ng use sparse auto-encoders. Hinton created (and uses) deep RBM networks. They both work in a similar way: each layer learns to reconstruct its input using a low-dimensional representation. In this way, lower layers build up, for example, line detectors, and higher layers build up more abstract representations.
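The 'reconstruct the input through a low-dimensional representation' idea can be shown in miniature. For a purely linear autoencoder the optimal encoder/decoder coincides with PCA, so we can skip the training loop and get it in closed form via an SVD. (Real RBMs and sparse auto-encoders are nonlinear and trained iteratively; this only illustrates the reconstruction objective.)

```python
import numpy as np

# 200 two-dimensional points lying near the line y = 2x, plus a little noise:
rng = np.random.default_rng(0)
t = rng.standard_normal(200)
X = np.stack([t, 2 * t], axis=1) + 0.01 * rng.standard_normal((200, 2))

# Optimal linear 'encoder' for a 1-D code: the top principal direction.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
v = Vt[0]                  # the learned 'feature' direction

code = X @ v               # encode: 2 numbers -> 1 number per point
recon = np.outer(code, v)  # decode: 1 number -> 2 numbers per point

err = np.mean((X - recon) ** 2)
print(err)  # tiny: the 1-D code captures almost all of the input
```

Stacking such layers, each reconstructing the output of the one below, is the basic shape of the deep architectures described above.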
From the task description:
"Restrictions: Must run in 10 hours on a 2GHz P4 with 1GB RAM and 10GB free HD"
So, even if you could write an algorithm that fits in a couple of megabytes and magically develops awesome feature-extraction capabilities, which is kind of what deep learning can do, you'd be excluded from using it in the Hutter Prize competition.
For comparison, the Google / Andrew Ng experiment in which a computer learned to recognize cats all by itself used a cluster of 16,000 cores (1,000 nodes * 16 cores) for 3 days. That's a lot of core-hours, and far exceeds the limits of the Hutter Prize competition.
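A back-of-the-envelope comparison of the two compute budgets, using the numbers above (and ignoring that a modern server core does far more work per hour than a 2 GHz P4):

```python
# Google cat experiment: 1,000 nodes * 16 cores, running for 3 days (72 h).
google_core_hours = 1000 * 16 * 72

# Hutter Prize budget: one 2 GHz P4 core for 10 hours.
hutter_core_hours = 1 * 10

print(google_core_hours)                      # 1,152,000 core-hours
print(google_core_hours / hutter_core_hours)  # ~115,000x the allowed budget
```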
Check out Nic's password generator: http://angel.net/~nic/passwd.current.html
I extended it a bit, https://github.com/hughperkins/passwordbookmarklet :
- generates longer passwords
- the bookmarklet's password field masks its input
- there's the option of using a bookmarklet with a 'confirm' field
- added a console application (Python) which invisibly copies the password to the clipboard, for non-web applications
Whooshy-whoosh! I've always wanted to do that
Specialized content for machine learning / artificial intelligence. I chain-read them for 18 hours till I'd finished!
I get the feeling that many of the comments here are from people who are 30-50, with just a few exceptions. (I'm somewhere in the middle of that range myself, in fact.) Are Slashdot users getting older? Where do the 20-somethings hang out?
> 2) Periodic activation every 30 days - this one seriously ticks me off after I've already activated once then wtf?
To save other people from googling: what the parent means is that if you want to play StarCraft offline on a particular computer, you must have played StarCraft online on that computer within the last 30 days.
I was panicking for a bit, thinking I'd just lost my battle.net profile, since I haven't played SC2 for... a while...