Hey, the condescending article manages not to know what positive predictive value is!
Learned is such a strong word. I teach evidence-based medicine, including test evaluation, to med students and residents. I also have to try to explain it to practicing physicians.
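For anyone wondering why PPV trips people up: it depends on prevalence, not just on how accurate the test is. A quick sketch using Bayes' rule (numbers are illustrative, not from any particular study):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(disease | positive test),
    computed from the test's characteristics via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A test that is 99% sensitive and 99% specific still yields a PPV
# of only about 50% when the condition affects 1 in 100 people.
print(round(ppv(0.99, 0.99, 0.01), 3))
```

The counterintuitive part, and the part practicing physicians often miss, is that a "99% accurate" positive result can still be a coin flip in a low-prevalence population.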
Nest thermostats don't seem the least bit inexpensive to me. Knowing what it takes to actually build one, I'd say they cost right about what the hardware and back-end infrastructure would run, plus some boutique-level profit. You could make one for a quarter of the cost without the cloud stuff.
I find R's syntax really annoying for actually doing anything. So I do all the data acquisition, manipulation, etc. in Python and use the RPy2 bridge to just run the actual analysis in R. Best of both worlds.
Human experimentation without review board approval and informed consent violates a number of national and international laws. It doesn't matter whether anyone gets hurt.
Learn to sleep on planes. It changes your life.
I pretty much automatically fall asleep when I sit down now. Usually open my eyes for the takeoff, then fall asleep again well before cruising altitude.
It's actually a challenge to stay awake for part of the flight when flying west to prevent jet lag.
I've gone out scanning for APs. Recording SSIDs and recording data packets are COMPLETELY different things. You don't "inadvertently" do the second while doing the first. In fact, actually connecting to the APs just slows your entire operation down.
And there are laws specifically against recording unencrypted signals emanating from someone's house (the wiretap laws in question). What's your point? The taking-pictures-through-your-window analogy is pretty much exactly what happened.
Google didn't just scan SSIDs like a regular war driver would, they connected to the APs and recorded traffic. That's not just "oopsie, it was an accident."
Americans don't seem to like independents. They usually don't vote for them, and there seem to be laws designed to discourage them. Perhaps the ones in power want to stay that way, so they pass whatever laws they think they can get away with to keep challengers at bay.
Actually, research suggests that once you have enough basic funds to get yourself on the radar, more doesn't make much difference at all. The biggest factor in swaying elections is the candidate's personal attractiveness. Mostly visual.
I'm going to cherry pick a bit on your absentee list. Domestic spying? Your republic will boldly march on. Wait, you don't like the idea of being spied upon in America, land of the free? Is it really that different to propose laws (from an extreme religious motivation) that control women's fertility and ban stem cell research? Or the biggest one of all, put a discriminatory, rights-removing amendment in the constitution of the United States itself?
Sure, nobody actually runs a Turing test. It's too hard. If a real Turing test were ever passed, there wouldn't be any dispute. They're ALL restricted versions where the judges go easy on the computers.
The 13-year-old gambit isn't the problem, though (it's the judges). In fact, it suggests all sorts of strategies for the judges to trip up the computer. I just had a quick conversation with Eugene where I told him a story about a pretty girl asking a guy to go to the movies, and the two of them sitting right at the back. He changed the subject. Obviously not a 13-year-old boy.
I agree with you: a proper Turing test is the best, possibly the only, way we currently have to assess an AI. But if you have a computer that you think is at the level of a five-year-old, for example, find some child psychologists and let them talk to it and to some real five-year-olds. Or thirteen-year-olds. Or adults. Age doesn't matter.
So are you assuming that this program is uncooperative because it's supposed to model a 13-year-old? I've talked to it. It doesn't seem uncooperative. It's only convincing if you ask it simple factual questions, though. Anybody who doesn't think intelligence is about memorizing trivia, the very thing the 13-year-old gambit is supposed to help explain away, would see through it very quickly.
Over 50% is your test; Turing didn't suggest any such thing. IIRC, in his paper he suggested only that judges being unable to tell the difference between a computer and a person was the criterion for passing. Your criterion requires that the computer beat the person more often than chance.
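To make the distinction concrete (my own framing, not anything from Turing's paper): under the indistinguishability criterion you'd check whether judges identify the computer at a rate consistent with coin-flipping, rather than asking the computer to win a majority of rounds. A toy two-sided binomial check, with made-up judge counts:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: probability, under chance,
    of an outcome at least as unlikely as observing k out of n."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(pr for pr in probs if pr <= observed + 1e-12)

# 17 of 30 judges correctly pick out the computer: well within
# what coin-flipping judges would produce, so "indistinguishable".
print(binom_two_sided_p(17, 30))

# 28 of 30 correct: the judges clearly CAN tell, so the test fails.
print(binom_two_sided_p(28, 30))
```

The point is that "judges can't do better than chance" and "computer fools a majority" are different bars, and the second is both weaker and not what the paper proposed.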
The slow typist is only a problem because of the artificially limited (for practical purposes) duration of the Turing test. It's not the same situation as a 13-year-old. The language problem is similar: if the subject's understanding of the language is bad enough, you can't ask reasonable questions.
Age is different. In fact, I'd argue that mixing in children and non-native but reasonably fluent English speakers is a valuable addition to the test. It distracts the judges from things that don't matter, like grammar and factual knowledge, leaving them more attention for the things that do.
Realistically, the actual Turing test is so hard that nobody ever runs it. The chatbots do reasonably well at restricted Turing tests where the judges focus on knowledge and sentence parsing, and horribly at cognitive tasks like understanding a story and forming defensible opinions based on it. The former are things 13-year-olds aren't so good at; the latter is something they can do quite well.