I never could get the hang of Thursdays.
Hi. I'm a theoretical cryptographer.
"Encryption can be broken,"
Some implementations have been broken. Encryption itself is generally fine (as long as you stick with well-studied, standardized methods). It's a fair point that encryption is always subject to real-world factors, but the most common libraries are pretty good. Whenever you read about a data breach in the news, it's not because encryption was broken--something else went wrong (and, frequently, exposed data that wasn't encrypted in the first place).
"especially the kind that exposes useful information about the plaintext as this one does."
Homomorphic encryption does not expose useful information about the plaintext, although the article doesn't make that clear. You start with an encrypted input, perform an operation, and get an encrypted output. Only the person with the key--who is not the person performing the computation--can decrypt the result.
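To make that workflow concrete, here's a minimal sketch of an additively homomorphic scheme--Paillier, with tiny hardcoded primes that are for illustration only (a real system would use a vetted library and large random keys). The untrusted party multiplies ciphertexts without ever seeing a plaintext; only the key holder can decrypt the resulting sum:

```python
import math
import random

def keygen(p=1789, q=1861):
    # Toy primes for illustration only; real keys use primes of ~1024 bits.
    n = p * q
    lam = math.lcm(p - 1, q - 1)   # Carmichael's lambda(n)
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return n, (n, lam, mu)         # public key, private key

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # Enc(m) = (1 + n)^m * r^n mod n^2
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    # L(x) = (x - 1) // n applied to c^lambda mod n^2 yields m * lambda mod n;
    # multiplying by mu = lambda^-1 mod n recovers m.
    return (pow(c, lam, n2) - 1) // n * mu % n

n, priv = keygen()
c1, c2 = encrypt(n, 3), encrypt(n, 4)
# The party doing the computation multiplies ciphertexts without the key...
c_sum = c1 * c2 % (n * n)
# ...and only the key holder can decrypt, obtaining the sum of the plaintexts.
assert decrypt(priv, c_sum) == 7
```

Paillier only supports addition of plaintexts (and multiplication by known constants); fully homomorphic schemes, which support arbitrary computation, are far more involved.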
There is a somewhat-related but distinct concept, called "functional encryption", in which one can distribute a key associated with a function f. That key allows a user to take an encryption of x and obtain f(x)--but nothing else about x other than f(x), where "nothing else" has a mathematical formalization. So you could (conceptually) encrypt your entire medical record and give your doctor a key for the function that calculates the probability that you'll have a heart attack in the next five years. Then they'll be able to calculate that probability, but nothing else about you.
"A much simpler alternative is to keep your genetic information in your own control, processing it on your own computer with open source software. You know, just what we already do with other sensitive information like passwords."
This I agree with, in an ideal world. Will we be living in such a world, 5, 10, or 20 years down the line? I don't know. Right now, the trends are largely in outsourcing everything--more and more, your data and computation live on the cloud. For medical information, your doctor doesn't do all the tests himself--he outsources them to a lab. For genetic information, 23andMe doesn't sell software that lets you analyze your own genetic markers--they take your information and perform the analysis on it themselves. So these trends will need to change before the above takes place.
It would be great to keep one's own data and get all the various analysis tools via FOSS. But someone needs to write and distribute those tools--as well as make it feasible to obtain one's own data in the first place (I don't know about you, but I don't have an MRI machine in my house). So until that world exists, homomorphic encryption is a potentially useful tool in this area.
[It also has uses beyond securely outsourcing computation, but that's somewhat off-topic.]
Actually, it's not.
If I had been talking about Swartz, or the case itself, it would be an argument from authority. But as I mentioned at the beginning, I was talking about Abelson.
Various commenters are slamming Abelson for making a comment they disagree with, when they don't have a clue who he is or what work he's done--he isn't saying what the knee-jerk reactions assume he's saying.
I'm not arguing that people should agree with Abelson about Swartz. I'm saying that given his history, it might make sense for people to at least give a reasonable look at what he's saying, and if they then disagree with him to address that on the issues, rather than rushing to post inaccurate, sarcastic posts based on a headline.
"It was MIT who insisted on tough punishments and wouldn't allow a slap on the wrist."
No, it wasn't, despite what the highly-modded-up comments on Slashdot would have you believe.
"No, he wasn't naive; his punishment was overblown."
It can't be both?
"Well, Hal, if this is what it takes to let you sleep at night despite your and your school's part in Swartz's persecution, have at it. But I doubt too many people are buying it; at this late date pretty much everyone's mind is made up anyway."
Including Slashdotters', apparently. But since you're making this about Abelson rather than Swartz, here are a few facts about the man you're casually brushing off.
Abelson is an old Lisp hacker. He has a long history of standing up for Freedom, in the sense of free software and free information.
He has not shied away from standing up for freedom of information, even if there are heavy legal consequences involved.
He also puts his money where his mouth is, releasing a number of his own works for free. Before ebooks were a thing, he made sure his book was available for free online. He helped get OpenCourseWare off the ground. Heck, he's released (under Creative Commons) video of some of his own lectures...from 1986.
He's an expert in the area (in addition to the above personal experience, he also teaches a course on Ethics and Law in the Electronic Frontier). He also spent six months investigating and writing a book-length report about the Swartz case, and MIT's response to it, in particular. The summary describes the report as MIT "clearing itself"--while the report details that MIT did nothing legally wrong, it also goes into the moral and ethical issues of MIT's response without reaching a bright-line conclusion.
So, with all of this as context, which is more likely:
-Abelson is trying to make Swartz look like a bad guy so that he can "sleep at night", or
-The man with a long history of views and actions supporting freedom of information, with a background in ethics and law on computer-related issues, who quite possibly is the single individual who has done the most thinking about the details of the Swartz case and MIT's response to it (and certainly knows more about it and has thought more about it than any Slashdotter), honestly and genuinely thinks that Swartz was naive about the realities of the situation he got himself into...and maybe, just maybe, it might make sense to give at least a small amount of genuine, honest consideration to his views?
The technology is already on the roads. But aside from the normal amount of time necessary for technology adoption, this also faces significant legal hurdles. There's the big question of liability, of course, and we're starting to deal with that now. But the legal issues will get worse before they get better--self-driving cars are still experimental enough that they aren't a huge political battlefield yet.
Once they develop a bit more, many people will have safety and NIMBY concerns--even if self-driving cars are much safer than human drivers, many won't want them on the road without 100% safety. Not to mention other lobby groups--cab drivers, truck drivers, and so forth are heavily unionized, and will use their political sway to oppose this technology as much as possible, since it will (eventually) take their jobs in a very real, direct sense.
There's some needed context.
Aaronson himself works on quantum complexity theory. Much of his work deals with quantum computers (at a conceptual level--what is and isn't possible). Yet there are some people who reject the idea that quantum computers can scale to "useful" sizes--including some very smart people like Leonid Levin (of Cook-Levin Theorem fame)--and some of them send him email, questions, comments on his blog, etc. saying so. These people are essentially asserting that Aaronson's career is rooted in things that can't exist. Thus, Aaronson essentially said "prove it."
It's true that proving such a statement would be very difficult, and you raise some good points as to why. But the context is that Aaronson gets mail and questions all the time from people who simply assert that scalable QC is impossible, and he's challenging them to be more formal about it.
He also mentions, in fairness, that if he does have to pay out, he'd consider it an honor, because it would be a great scientific advance.
"Assuming no relationship between decisions is ludicrous. On many items that aren't terribly controversial, Ginsburg and Scalia, for example, would rule similarly just because they are trained judges with a background in US law."
I'd be really surprised if you didn't have a correlation between how one particular justice votes and how the rest of the justices vote.
Last Supreme Court term,
-Almost half of all Supreme Court decisions were unanimous
-The two Justices who disagreed most frequently in judgment were Ginsburg and Alito--and they still agreed with each other noticeably more than half the time (62.5%). Ginsburg and Scalia, in your example, agreed in judgment 65% of the time.
-That said, there is at least some truth to there being a "liberal wing" and a "conservative wing" (with Kennedy being the "swing vote"): of the 16 cases that were decided 5-4, 14 of them were Roberts-Scalia-Thomas-Alito vs. Ginsburg-Breyer-Sotomayor-Kagan with Kennedy casting the deciding vote. But a number of the lineups are more interesting.
The Justices are highly educated professionals, and as such agree with each other a lot of the time about what the law actually says. None of them is blindly ideological--but just the same, they do have their individual opinions about how the law should be interpreted, so some level of ideology is certainly present.
It's not just "the ones who fail the metal detector" who get pat-downs, and that's not what the article is about. The TSA is increasingly using backscatter x-ray machines; if they decide to put you through one of those, you can opt to get a manual pat-down instead. This is the category of people we're talking about; they are trying to get more people to choose the backscatter x-ray by making the manual search more uncomfortable.
As for there not being enough scanners, TFA says "Agents were funneling every passenger at this particular checkpoint through a newly installed back-scatter body imaging device." I can confirm this; the last few times I've been to Logan Airport in Boston, they were putting every adult through the scanner. (They allowed a few small children to go through the metal detector instead.) Perhaps this is true only at some airports or only at non-peak times, but there are certainly situations where everyone gets funneled to the backscatter machine, and opt-outs get patted down.
The second time this happened to me, the TSA agent announced that we would go through the scanner, and didn't mention that anyone had the option to get a manual pat-down instead. When I politely requested to opt out of the scanner, the TSA agent kept trying to talk me out of it, repeatedly asking why I wanted a pat-down, informing me that it would be degrading, etc., before finally allowing it. (Honestly, one of the reasons I wanted to request a pat-down was so that other people knew it was an option!)
"it would be more work to re-engineer somebody else's code to avoid detection than to just write it from scratch."
Very true. I've been a TA in CS at a well-known university, and it's surprising how many students don't realize how easy it is to catch cheaters.
Much of our work is cooperative--either explicitly group work, or of the "you can talk with friends about the ideas or help them debug, but write it up yourself" variety. In addition, the TAs are available for any questions (and don't mind helping you--really!). So, it's really not that hard to do the work honestly and do ok. Maybe you won't do great, but you'll do ok.
But I never saw the people who ended up cheating during office hours, or saw other signs that they were putting forth any effort to actually learn. So I don't think cheating is a matter of ability so much as laziness. The issue, as parent rightly points out, is that while it's certainly possible to cheat in an undetectable manner, doing so requires at least as much work as doing the assignment. And if someone is cheating due to laziness in the first place, they often don't put much effort into cheating, and it's very, very easy to catch them.
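For a sense of how little it takes to catch a lazy cheater, here's a toy sketch of the token-fingerprinting idea behind similarity checkers like MOSS. The sample snippets and the keyword list are invented for the demo, and real tools canonicalize and winnow far more carefully:

```python
import re

# Python keywords we keep verbatim; every other identifier collapses to "ID",
# so renaming variables doesn't hide a copy.
KEYWORDS = {"def", "return", "for", "in", "import", "lambda", "while", "if", "else"}

def fingerprints(code, k=5):
    # Tokenize into identifiers and single punctuation/digit characters.
    tokens = re.findall(r"[A-Za-z_]\w*|[^\sA-Za-z_]", code)
    canon = [t if t in KEYWORDS or not re.match(r"[A-Za-z_]", t) else "ID"
             for t in tokens]
    # Overlapping k-grams of canonical tokens serve as fingerprints.
    return {tuple(canon[i:i + k]) for i in range(len(canon) - k + 1)}

def similarity(a, b, k=5):
    fa, fb = fingerprints(a, k), fingerprints(b, k)
    return len(fa & fb) / max(1, len(fa | fb))  # Jaccard index

original = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
renamed  = "def add_up(vals):\n    acc = 0\n    for v in vals:\n        acc += v\n    return acc"
fresh    = "def product(xs):\n    import functools\n    return functools.reduce(lambda a, b: a * b, xs, 1)"

# Renaming every identifier leaves the fingerprints unchanged, while
# independently written code shares almost none of them.
assert similarity(original, renamed) == 1.0
assert similarity(original, fresh) < 0.5
```

Defeating even this toy check requires restructuring the program's actual token sequence, which is most of the work of writing it yourself.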
"If you are too stupid to realize that when you hand in plagiarized code, you aren't taking a *risk* that you will be caught--you are engaging in the certainty that you will be caught--then you don't deserve to be at a university of this caliber."
I'd agree, but sadly the full consequences don't always filter through, because departments and institutions make it hard. One of my students once blatantly cheated on a large final project. As a TA, I would have supported very harsh penalties. I was a bit let down when the professor gave a lesser penalty...and mentioned the reason for it to me: ultimately, if the professor tried to institute a penalty with long-lasting academic effects, it would mean a ton of paperwork and annoyance on the professor's part. I don't blame the professor for this (since I know how much he really had going on at the time), but I think the department should have made it a bit easier for people to deal with cheaters.
"Install the 'oldbar' add-on."
In any discussion of the awesome bar, someone always mentions oldbar, and someone else always mods it informative.
The description of oldbar is "oldbar makes the location (URL) bar *look like* Firefox 2" (emphasis added). It does not change functionality, only appearance. rantingkitten--and I--have complaints about the functionality first and foremost. oldbar doesn't help.
I could go through a litany of complaints about the actual functionality (ridiculous prioritization decisions, various forms of nondeterminism that don't make consistent sense even if you accept the prioritization decisions, etc.), but ultimately my complaint is the same as grandparent's: There's no way to turn it off. And by "turn it off" I don't mean "make it look different while retaining AwesomeBar functionality" or "disable location bar dropdown altogether" (as replies to this complaint typically suggest). I mean "revert to the previous, predictable, sane functionality."
Currently, I'm approximating the previous functionality with a number of obscure, poorly-documented about:config tweaks, but 1) why should you need to go through that just to get a rough approximation of the user experience you had in the previous version, and 2) it's not perfect; there's still strange behavior.
Sure, have the awesome bar. Sure, make it the default--a lot of people like it. All I want is a checkbox to revert to the previous behavior, that's all.
"If you can confirm your vote, you can prove how you voted to others. This makes room for buying and extorting votes! I can imagine some employers requiring you to prove you voted correctly to keep your job."
Or union bosses. Or the local political-organizing group slipping you some money in exchange for voting a certain way. Or even an unorganized gang of thugs trying to intimidate you (think a group of rednecks who suspect you might have voted Democratic, or a group of Berkeley hippies who suspect you might have voted for Prop 8).
But I disagree with your first sentence. It's certainly true about the scheme proposed by GP, but contrary to intuition, there are ways to confirm your vote without being able to prove how you voted to others.
Such voting systems typically use a "cut-and-choose" method in which your vote is split into two or more pieces, any one of which is useless for determining how someone voted, yet which together make up the full vote. The voter takes a copy of one of the pieces as a receipt and can verify that the piece was counted correctly. So if there are two pieces overall, someone trying to tamper with votes has a 50% chance of being caught for each vote tampered with, and the chance of escaping detection quickly becomes negligible for any significant number of votes. Yet the voter can show the piece to others, and it gives no information about how they voted.
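Here's a toy sketch of just the vote-splitting step--standard additive secret sharing. A real end-to-end verifiable system (Prêt à Voter, Scantegrity, etc.) layers commitments, mixing, and public audits on top of this basic idea:

```python
import random

def split_vote(vote, num_candidates):
    # Split a vote (0 .. num_candidates-1) into two shares that sum to it
    # modulo num_candidates. Each share alone is uniformly random, so it
    # reveals nothing about the vote.
    r = random.randrange(num_candidates)
    return r, (vote - r) % num_candidates

def reconstruct(share_a, share_b, num_candidates):
    return (share_a + share_b) % num_candidates

m = 4                                  # four candidates, numbered 0..3
receipt, retained = split_vote(2, m)   # voter keeps `receipt` to verify later
assert reconstruct(receipt, retained, m) == 2
# Showing `receipt` to a vote-buyer proves nothing: every value 0..3 is
# equally likely regardless of the actual vote.
```

The voter checks that the published copy of their receipt piece matches the one they took home; tampering with the other piece is the 50/50 gamble described above.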
The issues with these new systems seem to be usability, inertia, and public trust. Usability: Voting should be extremely simple for the voter. If Great-Grandma can't do it, it's not going to be our voting system.
Inertia: Current election systems seem to be "good enough" for most people; despite some agitated geeks and the occasional news story about voting machines being laughably insecure, there isn't a huge popular movement to change. (Cost of switching systems can also be included here.)
Public trust: Even if cryptographers agree that a system is secure, if the system involves a user experience any different from the familiar "check off from a list of names" protocol, they'll have to work to convince the lay public that it's ok.
Legally, the difference between a bar and a country club is that the bar is what is frequently referred to as a semi-public space. That is, it is private property, but is open to the general public. Restaurants, shops, etc. typically fall into this category.
Owners of semi-public spaces do have some rights to control their property (e.g. enforcing a rule that they'll kick you out of the store if you don't buy anything, or a movie theater not allowing kids into a theater hall that is currently showing an R-rated film). However, they have fewer rights over the property than owners of private spaces do (e.g. they can't prevent someone from entering solely based on their race).
A country club is typically a fully private space--while there are procedures for gaining access, the general public is excluded. A bar is a semi-public space--there is a general expectation that it is "open to the public" (subject to legal age restrictions). Your proposal of "membership" might be seen as an attempt to make a bar a letter-of-the-law private space. IANAL, but I'd expect it to fail in one of two ways:
1) Someone could legitimately argue that the temporary "membership" is basically a farce and the bar is still a semi-public space, since the general public can still access it--and, indeed, is encouraged to--by gaining trivial membership.
2) There may be zoning restrictions involved. Bars are frequently located in commercial zones; cities may require any businesses operating in the area to be semi-public.
If it happens once, it's a bug. If it happens twice, it's a feature. If it happens more than twice, it's a design philosophy.