

Comment: Re:so breakthrough (Score 4, Interesting) 142

by hughperkins (#49078041) Attached to: Breakthrough In Face Recognition Software

They're using a standard technique. Convolutional networks started to become big with LeCun's 1998 paper on learning to recognize hand-written digits. His LeNet-5 network could identify the digit correctly 99% of the time.

Convolutional networks are also starting to be used to play Go, e.g. 'Move Evaluation in Go Using Deep Convolutional Neural Networks', by Maddison, Huang, Sutskever and Silver. Maddison et al used a 12-layer convolutional network to predict where an expert would move next with 50% accuracy :-)

Progress on convolutional networks moves forward all the time, in an incremental way. If we ran one article per day about each increment, it would quickly lose mass appeal though :-) The article covers one increment along the way, but it does symbolize the massive progress being made.

Convolutional networks work well partly because they can take advantage of the massive computational capacity made available in GPU hardware.
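The core operation is simple enough to sketch in a few lines of numpy. This is just the convolution itself, for illustration (not LeCun's actual LeNet-5 code):

```python
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2D convolution (really cross-correlation, as in most
    # deep-learning libraries): slide the kernel over the image and take
    # a dot product at each position.
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-made vertical-edge detector: it responds where pixel
# intensity changes from left to right.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)
print(conv2d(image, kernel))  # peaks in the middle column, at the edge
```

A real network stacks many such filters, with learned rather than hand-picked weights, plus nonlinearities and pooling between layers.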

Comment: Re:Spike boots (Score 3, Informative) 142

by hughperkins (#49077989) Attached to: Breakthrough In Face Recognition Software

Yes, check out 'High Confidence Predictions for Unrecognizable Images', by Nguyen, Yosinski and Clune. It's a paper that shows an image that the net is 99.99% sure is an electric guitar, but that looks nothing like one :-)

For the technically minded, the paper's authors propose that the reason is that the network is using a discriminative model, rather than a generative model. That means the network learns a mathematical boundary that separates the classes of images it sees, in some kind of high-dimensional transformed space. It doesn't learn how to generate new such images, i.e. you can't ask it to 'draw me an electric guitar' :-) Maybe in a few years :-)
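A toy example of why a purely discriminative model can be so confidently wrong: everything on one side of the learned boundary gets a high score, no matter how far it is from any training data. A minimal sketch with hand-picked weights (not a trained network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 1-D "discriminative model": a logistic-regression boundary at x=0,
# separating a class-0 cluster near x=-1 from a class-1 cluster near x=+1.
w, b = 4.0, 0.0

p_near = sigmoid(w * 1.0 + b)    # a typical class-1 point
p_far = sigmoid(w * 100.0 + b)   # a point unlike anything seen in training

print(p_near)  # high confidence, reasonably so
print(p_far)   # even *more* confident, despite never seeing anything like it
```

The model only learned where the boundary is, not what class-1 data actually looks like, so distance past the boundary just means more confidence.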

The authors don't compare the network with the human brain much though. I.e., are they saying that the human brain uses a generative model? Is that why the human brain doesn't look at a white-noise picture and claim it's a horse?

Comment: Re:Even more work for spies! (Score 1) 99

Note that encfs is perfect for this:
- encrypts using AES-256
- easy to use
- works on Linux :-)
- and there's at least one Android app that's compatible with the encryption format
- each file is still stored as a single encrypted file, so:
      -- no risk of losing all your data at once :-)
      -- replication can still happen file by file
- works through FUSE, so no need for admin rights, kernel drivers and stuff :-)

Comment: Re:if you can access it on a website (Score 2) 107

by hughperkins (#45637553) Attached to: Storing Your Encrypted Passwords Offline On a Dedicated Device

You can use a single master password, combined with the URL of the website, to generate a unique password for each website, via a hashing algorithm.

There are several implementations of this idea around.

The advantages of this system are:
- only one password to remember
- if a website gets hacked, that password can't be used on other websites, and can't realistically be used to recover your master password, assuming the attacker even knows which algorithm you're using, which is unlikely
- unlike a password safe, you don't need to handle making backups, replicating them around, and so on
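The scheme can be sketched in a few lines. This is illustrative only, not any particular tool's algorithm; a real implementation should use a deliberately slow KDF such as PBKDF2 rather than a single SHA-256:

```python
import base64
import hashlib

def site_password(master, url, length=12):
    # Derive a per-site password by hashing the master password together
    # with the site's URL. Deterministic: the same inputs always give the
    # same password, so nothing needs to be stored anywhere.
    digest = hashlib.sha256((master + ':' + url).encode()).digest()
    # Base64 keeps the result typeable; trim to the desired length.
    return base64.b64encode(digest).decode()[:length]

print(site_password('correct horse battery staple', 'slashdot.org'))
print(site_password('correct horse battery staple', 'example.com'))
```

Because the hash is one-way, a leaked site password doesn't let an attacker walk back to the master password.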

Comment: Re:War! (Score 1) 259

by hughperkins (#44192441) Attached to: Mystery Intergalactic Radio Bursts Detected

To be fair, as far as 'we're a threat' goes, that includes 'we could become a threat in the future'. Why wait for us to become strong enough to be troublesome to mop up, when they could mop us up now?

It's a bit like keeping the fridge clean. You don't wait until it grows monsters that will actually attack you. You simply clean the surfaces occasionally, get rid of any traces of mold and stuff.

Comment: Re:Good points (Score 1) 209

by hughperkins (#43662975) Attached to: The New AI: Where Neuroscience and Artificial Intelligence Meet

"Asking whether a computer can be intelligent is like asking whether a submarine can swim".

An airplane doesn't flap its wings, but flies faster than birds can.

Submarines don't swim, but they move through the water faster than dolphins.

Not everything has to copy nature exactly in order to be effective.

Comment: Re:Geoffrey Hinton (Score 1) 209

by hughperkins (#43662929) Attached to: The New AI: Where Neuroscience and Artificial Intelligence Meet

There's also a great tutorial by Andrew Ng's group at:

There are currently two main types of deep learning, by the way:
- restricted Boltzmann machines (RBM)
- sparse auto-encoders

Google / Andrew Ng use sparse auto-encoders. Hinton created, and uses, deep RBM networks. They both work in a similar way: each layer learns to reconstruct its input, using a low-dimensional representation. In this way, lower layers build up, for example, line detectors, and higher layers build up more abstract representations.
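The "learn to reconstruct the input through a low-dimensional representation" idea can be sketched with a toy linear auto-encoder. This is a bare-bones illustration, not Hinton's RBMs or Ng's sparse auto-encoders (which add stochastic units and sparsity penalties respectively):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4-D inputs that secretly live in a 2-D subspace.
X = rng.normal(size=(200, 4))
X[:, 2] = X[:, 0] + X[:, 1]
X[:, 3] = X[:, 0] - X[:, 1]

W_enc = rng.normal(scale=0.1, size=(4, 2))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 4))  # decoder weights

for step in range(1000):
    code = X @ W_enc              # low-dimensional representation
    recon = code @ W_dec          # reconstruction of the input
    err = recon - X
    # Plain gradient descent on mean squared reconstruction error.
    W_dec -= 0.05 * code.T @ err / len(X)
    W_enc -= 0.05 * X.T @ (err @ W_dec.T) / len(X)

final_err = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(final_err)  # close to zero: the 2-D code suffices to reconstruct 4-D inputs
```

Stacking layers like this, each reconstructing the output of the one below, is what builds up the hierarchy of features the comment above describes.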

Comment: Re:Its not winning the Hutter Prize (Score 1) 209

by hughperkins (#43662915) Attached to: The New AI: Where Neuroscience and Artificial Intelligence Meet

From the task description:

"Restrictions: Must run in 10 hours on a 2GHz P4 with 1GB RAM and 10GB free HD"

So, even if you could write an algorithm that fits in a couple of megabytes and magically generates awesome feature extraction capabilities, which is kind of what deep learning can do, you'd be excluded from using it in the Hutter prize competition.

For comparison, the Google / Andrew Ng experiment where they got a computer to learn to recognize cats all by itself used a cluster of 16,000 cores (1000 nodes * 16 cores) for 3 days. That's a lot of core-hours, and far exceeds the limitations of the Hutter prize competition.
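Back-of-the-envelope, using the figures above (the cluster numbers are rough):

```python
hutter_core_hours = 10 * 1        # 10 hours on a single-core 2GHz P4
cat_core_hours = 16000 * 3 * 24   # 16,000 cores for 3 days
print(cat_core_hours)                       # 1,152,000 core-hours
print(cat_core_hours // hutter_core_hours)  # 115,200x the Hutter budget
```

So the cat experiment used over a hundred thousand times the compute the Hutter rules allow, before even considering the P4-vs-modern-core speed gap.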

Comment: Re:no (Score 1) 250

by hughperkins (#43033021) Attached to: Cryptography 'Becoming Less Important,' Adi Shamir Says

Check out Nic's password generator:

I extended it a bit:
- generates longer passwords
- the bookmarklet's password field masks its input
- there's the option of a bookmarklet with a 'confirm' field
- added a console application (Python) which invisibly copies the password to the clipboard, for non-web applications
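The console-application part might look something like this. A sketch only: the derivation scheme here is illustrative, not Nic's actual algorithm, and the 'xclip' utility is assumed for the Linux clipboard:

```python
import base64
import getpass
import hashlib
import subprocess

def derive_password(master, site, length=16):
    # Illustrative derivation only, not the generator's real scheme.
    digest = hashlib.sha256((master + ':' + site).encode()).digest()
    return base64.b64encode(digest).decode()[:length]

def copy_to_clipboard(text):
    # Linux-specific; assumes the 'xclip' utility is installed.
    subprocess.run(['xclip', '-selection', 'clipboard'],
                   input=text.encode(), check=True)

# Typical interactive use (commented out so the sketch stays importable):
#   master = getpass.getpass('Master password: ')   # read without echo
#   copy_to_clipboard(derive_password(master, 'slashdot.org'))
print(derive_password('demo master', 'slashdot.org'))
```

Using getpass plus the clipboard means the password never appears on screen or in the shell history.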

Comment: Re:awesome! (Score 1) 131

by hughperkins (#36079504) Attached to: Blizzard Aiming For Q3 <em>Diablo 3</em> Beta, 2011 Release

> 2) Periodic activation every 30 days - this one seriously ticks me off after I've already activated once then wtf?

To save other people from googling: what the parent means is that if you want to play StarCraft offline on a particular computer, you must have played StarCraft online on that computer within the last 30 days.

I was panicking for a bit, thinking I'd just lost my profile, since I haven't played SC2 for... a while...
