Note that encfs is perfect for this:
- encrypts using AES-256
- easy to use
- works on Linux
- and there's at least one app for Android that is compatible with the encryption protocol
- each file is still stored as a single file, so:
-- if one file gets corrupted, you don't lose all your data at once
-- replication can still be done file by file
- works through FUSE, so it doesn't need admin rights, kernel drivers and stuff
You can use a single password, combined with the url of the website, to generate unique passwords for each website, via a hashing algorithm.
The advantage of this system is:
- only one password to remember
- if a website gets hacked, the leaked password can't be reused on other websites, and can't realistically be used to recover your master password (assuming the attackers even know which algorithm you're using, which is unlikely)
- unlike a password safe, you don't need to handle making backups, replicating the backups around, and so on
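A minimal sketch of the idea in Python. The function name, iteration count, and output length here are my own choices, not anything prescribed; a real tool would want to tune the KDF parameters and character set:

```python
import hashlib
import base64

def site_password(master, site_url, length=16):
    """Derive a per-site password from a master password plus the site URL.

    Uses PBKDF2-HMAC-SHA256 with the URL as salt, so each site gets a
    different, deterministic password, and a leak at one site doesn't
    reveal the password for any other site.
    """
    digest = hashlib.pbkdf2_hmac('sha256',
                                 master.encode('utf-8'),
                                 site_url.encode('utf-8'),
                                 100_000)
    # base64-encode and truncate to get a typeable password
    return base64.b64encode(digest).decode('ascii')[:length]
```

The same master password and URL always produce the same output, so there's nothing to back up; different URLs produce unrelated outputs.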
To be fair, as far as 'we're a threat', this includes 'we could become a threat in the future'. Why wait for us to become strong enough to be troublesome to mop-up, when they could mop us up now?
It's a bit like keeping the fridge clean. You don't wait until it grows monsters that will actually attack you. You simply clean the surfaces occasionally, get rid of any traces of mold and stuff.
"Asking whether a computer can be intelligent is like asking whether a submarine can swim".
An airplane doesn't flap its wings, but flies faster than birds can.
Submarines don't swim, but they move through the water faster than dolphins.
Not everything has to copy nature exactly in order to be effective.
There's also a great tutorial by Andrew Ng's group at:
There are currently two main types of deep learning, by the way:
- restricted Boltzmann machines (RBM)
- sparse auto-encoders
Google / Andrew Ng use sparse auto-encoders. Hinton uses (created) deep RBM networks. They both work in a similar way: each layer learns to reconstruct its input using a low-dimensional representation. In this way, lower layers learn, for example, line detectors, and higher layers build up more abstract representations.
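As a toy illustration of the reconstruction idea (my own sketch in plain Python, not the code either group uses, and without the sparsity penalty a real sparse auto-encoder adds): a single auto-encoder layer learns to reproduce its input through a narrower hidden layer, so the hidden units are forced to find a compressed representation.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyAutoencoder:
    """One auto-encoder layer: sigmoid hidden units, linear output,
    trained by stochastic gradient descent on squared reconstruction error."""
    def __init__(self, n_in, n_hidden, lr=0.1):
        self.lr = lr
        self.W_enc = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
                      for _ in range(n_hidden)]
        self.W_dec = [[random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
                      for _ in range(n_in)]

    def encode(self, x):
        return [sigmoid(sum(w * xi for w, xi in zip(row, x)))
                for row in self.W_enc]

    def decode(self, h):
        return [sum(w * hi for w, hi in zip(row, h)) for row in self.W_dec]

    def train_step(self, x):
        h = self.encode(x)
        y = self.decode(h)
        err = [yi - xi for yi, xi in zip(y, x)]
        # backprop error into the hidden layer (before updating the decoder)
        dh = [sum(err[i] * self.W_dec[i][j] for i in range(len(err)))
              for j in range(len(h))]
        # decoder gradient: linear output, squared error
        for i in range(len(self.W_dec)):
            for j in range(len(h)):
                self.W_dec[i][j] -= self.lr * err[i] * h[j]
        # encoder gradient, through the sigmoid derivative h*(1-h)
        for j in range(len(h)):
            g = dh[j] * h[j] * (1.0 - h[j])
            for k in range(len(x)):
                self.W_enc[j][k] -= self.lr * g * x[k]
        return sum(e * e for e in err)

# four 4-dimensional patterns squeezed through 2 hidden units
data = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
ae = TinyAutoencoder(4, 2)
before = sum(ae.train_step(x) for x in data)
for _ in range(2000):
    for x in data:
        ae.train_step(x)
after = sum(ae.train_step(x) for x in data)
```

After training, the reconstruction error is much lower than at the start: the 2 hidden units have learned a compressed code for the 4 patterns. In a deep network you'd stack such layers, feeding each layer's hidden representation to the next.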
From the task description:
"Restrictions: Must run in 10 hours on a 2GHz P4 with 1GB RAM and 10GB free HD"
So, even if you could write an algorithm that fits in a couple of megabytes and magically generates awesome feature-extraction capabilities, which is roughly what deep learning can do, you'd still be excluded from the Hutter prize competition by those hardware limits.
For comparison, the Google / Andrew Ng experiment where they got a computer to learn to recognize cats all by itself used a cluster of 16,000 cores (1000 nodes * 16 cores) for 3 days. That's a lot of core-hours, and far exceeds the limitations of the Hutter prize competition.
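For concreteness, the back-of-the-envelope arithmetic, taking the figures above at face value and counting the Hutter prize machine as a single core:

```python
# core-hours used by the Google / Andrew Ng cat experiment:
# 16,000 cores (1,000 nodes * 16 cores) running for 3 days
google_core_hours = 16_000 * 3 * 24

# core-hours allowed by the Hutter prize: one 2GHz P4 for 10 hours
hutter_core_hours = 1 * 10

ratio = google_core_hours // hutter_core_hours
```

That works out to over a million core-hours for the cat experiment, roughly five orders of magnitude more compute than the Hutter prize budget.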
Check out Nic's password generator: http://angel.net/~nic/passwd.current.html
I extended it a bit https://github.com/hughperkins/passwordbookmarklet :
- longer passwords generated
- the bookmarklet's password field masks its input (a password-type field)
- there's the option of using a bookmarklet with a 'confirm' field
- added a console application (python) which invisibly copies the password to the clipboard, for non-web applications
Whooshy-whoosh! I've always wanted to do that
Specialized content for machine learning / artificial intelligence. I chain-read them for 18 hours till I'd finished!
I get the feeling that many of the comments here are from people who are 30-50, with just a very few exceptions. (I am somewhere in the middle of that range too in fact). Slashdot users are getting older? Where do the 20-somethings hang out?
> 2) Periodic activation every 30 days - this one seriously ticks me off after I've already activated once then wtf?
To save other people from googling, what the parent means is that if you want to play starcraft offline on a particular computer, you must have played starcraft online on that computer in the last 30 days.
I was panicking for a bit, thinking I'd just lost my battle.net profile, since I haven't played sc2 for... a while...
Firstly, why is this a nightmare? Who wants extra competition?
Secondly, "technical interview" is a misnomer. They're actually "potential colleague" interviews. Who is going to pick someone who is smarter than them, or who is going to give them competition for promotion?
Those who get through technical interviews either are smart enough to bluff the interviewer into thinking they're not quite as smart as the interviewer, but an OK guy to hang out with; or genuinely aren't as smart or talented as the interviewer, but are an OK guy to hang out with.
Quick tip: when you attend a technical interview, answering the questions correctly doesn't get you the job. Being amazed at how much the interviewer knows does.
They should give concerts and sell t-shirts!
Actually, selling t-shirts isn't a bad idea. Works for xkcd and smbc, I think, and it doesn't look like they sell them already.
I worked on freelancer.com for a few weeks, before getting a job at an investment bank.
During that time, I got a few jobs coming through, and found a regular client.
My approach was:
- don't put in the lowest bid: people will actually assume that the low bids come from inexperienced people. Put in a reasonable-sounding bid, and write a concise bid text, in fluent English, that shows you know the subject and have read the client's requirements. Ask questions to clarify points, again showing you read the original text the client wrote
- pick a very narrow field that you're really interested in, and that there seems to be a market for; get really good at it, and market yourself as a specialist in that field. Fewer potential jobs will arrive, but I feel the chances of being picked for one are much higher. And it's much more satisfying to submit a handful of bids and get a job than to spend a whole day spraying bids everywhere and getting nothing.
/me wipes coffee off the keyboard.