
Comment: Listening to keystrokes + HMM = Profit! (Score 3, Interesting) 244

by Theovon (#47455847) Attached to: German NSA Committee May Turn To Typewriters To Stop Leaks

Passwords have been stolen just by listening to keyboard click noises. Why would a typewriter be any different? A relatively straightforward codebook analysis of keypress noises, plus a hidden Markov model, plus the Viterbi algorithm will let you calculate the highest-probability sequence of letters for a given sequence of sounds and inter-keystroke timings. Even in German!
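The HMM-plus-Viterbi pipeline described above can be sketched in a few lines. This is a toy, not a working keystroke attack: the three-letter alphabet, the two "sound classes," and every probability below are invented for illustration; a real attack would build the emission model from a codebook of recorded keypress sounds.

```python
import numpy as np

# Toy HMM: hidden states are typed letters; observations are clustered
# keypress "sound classes" from a hypothetical acoustic front-end.
states = ["e", "n", "i"]              # tiny alphabet for illustration
obs_symbols = ["click_a", "click_b"]  # two made-up sound clusters

# All probabilities below are invented for illustration.
start_p = np.array([0.5, 0.3, 0.2])        # letter-frequency prior
trans_p = np.array([[0.3, 0.5, 0.2],       # bigram letter model
                    [0.4, 0.2, 0.4],
                    [0.5, 0.3, 0.2]])
emit_p = np.array([[0.8, 0.2],             # P(sound class | letter)
                   [0.3, 0.7],
                   [0.6, 0.4]])

def viterbi(obs):
    """Return the most probable hidden letter sequence for the observations."""
    T, N = len(obs), len(states)
    logv = np.full((T, N), -np.inf)        # log-prob of best path ending in each state
    back = np.zeros((T, N), dtype=int)     # backpointers for path recovery
    logv[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for j in range(N):
            scores = logv[t - 1] + np.log(trans_p[:, j])
            back[t, j] = np.argmax(scores)
            logv[t, j] = scores[back[t, j]] + np.log(emit_p[j, obs[t]])
    # Trace the highest-probability path backward.
    path = [int(np.argmax(logv[-1]))]
    for t in range(T - 1, 1 - 1, -1):
        if t > 0:
            path.append(int(back[t, path[-1]]))
    return [states[i] for i in reversed(path)]

sounds = [0, 1, 0]                         # indices into obs_symbols
print(viterbi(sounds))                     # most likely letter sequence
```

The same dynamic program scales to a full alphabet; the hard part of the real attack is the codebook analysis that produces the emission probabilities, not the decoding.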

Mind you, they have to be able to get a sound bug in there, but malware-infected computers near the typewriters could do the job.

Anyhow, basically, the technology used to do automatic speech recognition would make short work of tapping typewriters, so they’re fooling themselves if they think this’ll make much difference.

BTW, I have a strong suspicion that the Germans’ outrage is all a big charade. Every major country has big spy operations. The NSA is neither unique nor the first of its kind. The Germans could not have been ignorant of at least the general nature of the NSA’s dealings before Snowden, so while they openly object, secretly this is business as usual. By doing this, they fool their people into thinking they’re not being spied on by their own government and, using the US as a scapegoat, they also generate a degree of solidarity. Russian spy operations, of course, are way worse, so their objections are the same bullshit. And the Chinese government is all about lying to, well, basically everyone while they use both capitalism and cyberwarfare to take over the world and control everyone, so their recent statement about the iPhone is also a crock of shit.

This reminds me of Andrew Cuomo’s push to restore trust in government. The whole idea is disingenuous. Governments, like any large organization, will only do what the people need when there are checks & balances and transparency.

And as a final note, I believe that the stated purpose of the NSA is a good one: mine publicly available data to identify terrorist activity. That sounds like a good thing to do. It’s the illegal violations of privacy that are wrong. They violate our rights because it’s inconvenient to get the info they need some other way. It’s also inconvenient for me to work a regular job instead of selling drugs. There are much more convenient ways to achieve my goals that I avoid because they are wrong. To do their job, the NSA needs to find clever ways to acquire the information they need WITHIN THE LAW.

Comment: Anti-piracy campaigns are highly effective (Score 1) 214

by Theovon (#47449389) Attached to: Economist: File Sharing's Impact On Movies Is Modest At Most

But not for the reason you think.

A question we should be asking ourselves is what impact piracy would have had on movie revenue if we’d had higher-speed Internet in the days of Napster and Kazaa. We currently live in a culture where even non-technical people know that piracy is a copyright violation. There’s also the looming threat of being sucked up in a dredging operation, or of having your ISP (or the NSA) volunteer your metadata to the MPAA. People don’t avoid filesharing because it’s unethical or illegal. They avoid it because it’s relatively inconvenient (requiring technical knowledge), and they fear excessive penalties if they’re caught.

If pirating movies were as simple as downloading an app and searching people’s libraries, the amount of piracy would be far greater, and the impact on revenue would be more significant.

What’s really curious to me, however, is the amount of time and effort some people spend on this. Personally, I’d rather minimize how much time I spend on it than see how cleverly rebellious I can be. If I want to watch a movie that’s currently out on DVD, I have four classes of options:
(1) I could spend about half an hour figuring out which of the numerous available torrents is in a playable format and not a fake and then maybe a couple hours downloading it. If I’m really lucky, I can burn it to a DVD that my player will understand so I don’t have to take the time to connect my laptop to the TV.
(2) I could run down to the nearest RedBox, about 15 minutes round trip, and spend the rest of the time doing some consulting work. Not only would I have a legal copy, but I’d come out ahead financially.
(3) If I have some patience to wait a day, I can order my own copy to keep from Amazon Prime, and I’ll STILL come out ahead financially.
(4) If I’m dead-set on a lengthy download, services like iTunes offer up a wide variety of downloadable media.

I suspect most of us clever enough to avoid getting caught pirating think this way. The legal options are just easier, less costly (time == money), and less risky. Those with the skills are already in a minority, so the only people doing any significant amount of piracy are those with both the skills and nothing better to do than to see how clever they can be at unnecessarily breaking the law.

I encounter that attitude a surprising amount, though, among students. There are people who will spend more time and effort trying to BS their way through an assignment and/or find a way to avoid the need to do it than would be necessary to actually just do the assignment. Doing the assignment requires learning something new, while all this “clever” avoidance relies on established skills. I don’t know why these people bothered to go to college if they have no interest in learning the material. I guess they feel pressure, culturally or parentally, but I don’t like it when they make it my problem.

Comment: Elites in any field must have some OCD (Score 1) 608

by Theovon (#47416351) Attached to: Normal Humans Effectively Excluded From Developing Software

Those people really far out on the cutting edge of new sciences are successful only because they have some major obsessive qualities. They are driven to learn, understand, and create. They understand things so abstract and esoteric that it would be all but impossible to explain some of these ideas to a lay person. And each of us has some secret weapon, too. Mine, for instance, is that I can code rings around most other CS professors. I’m not actually smarter than they are. Indeed, most of them seem better at coming up with good ideas on the first shot. My advantage is that I can filter out more bad ideas faster.

A key aspect of the areas we are experts in is that they have underlying, unifying principles. Subatomic particles fit into categories and have expected behaviors that fit a well-tested model. CPU architectures, even exotic ones, share fundamentals of data flow and computation. CS is one of those fields that is invented more than it is discovered, and as we go along, scientists develop progressively more coherent approaches to solving problems. Out-of-order instruction scheduling algorithms of the past (Tomasulo’s algorithm and the CDC 6600 scoreboard, for instance) have given way to more elegant approaches that solve multiple problems using common mechanisms (e.g., register renaming and reorder buffers). You may think the x86 ISA is a mess, but that’s just a façade over a RISC-like back-end that is finely tuned based on the latest breakthroughs in computer architecture.
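To make the register-renaming idea concrete, here is a minimal sketch of the allocation step. It is illustrative only (a real renamer also tracks free lists, commit, and recovery); the function name and register counts are mine, not anything from a real design.

```python
# Minimal register-renaming sketch (illustrative only): every architectural
# destination register gets a fresh physical register, so write-after-write
# and write-after-read hazards between instructions disappear and only true
# data-flow dependences remain.

def rename(instructions, num_arch_regs=4):
    """instructions: list of (dest, src1, src2) architectural register indices.
    Returns the same instructions rewritten with physical register ids."""
    rat = list(range(num_arch_regs))      # register alias table: arch -> phys
    next_phys = num_arch_regs             # next free physical register
    renamed = []
    for dest, src1, src2 in instructions:
        ps1, ps2 = rat[src1], rat[src2]   # read sources via current mapping
        rat[dest] = next_phys             # allocate a fresh phys reg for dest
        renamed.append((next_phys, ps1, ps2))
        next_phys += 1
    return renamed

# Two writes to r1: after renaming they target different physical registers,
# so the WAW dependence is gone and both can be scheduled out of order.
prog = [(1, 2, 3), (1, 0, 2)]
print(rename(prog))
```

The second instruction no longer overwrites the first one’s result, which is exactly what lets a reorder buffer retire them in program order while executing them in data-flow order.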

Then there’s web programming. Web programming is nothing short of a disaster. There are a million and one incompatible JavaScript toolkits. HTML, CSS, and JavaScript are designed by committee, so they have gaps and inconsistencies. Writing even the simplest web site (with any kind of interactivity) requires working with five different languages at once (HTML, CSS, JavaScript, SQL, and at least one back-end language like PHP), and they’re not even separable; one type of code gets embedded in the other. People develop toolkits to try to hide some of these complexities, but few approach feature-completeness, and it’s basically impossible to combine some of them. Then there’s security. In web programming, the straightforward, intuitive approach is always the wrong approach because it’s full of holes. These tools were not originally developed with security in mind, so you have to jump through hoops to plug the holes manually with a ton of extraneous code. In terms of lines of code, your actual business logic will be small in comparison to all the other cruft you need to make things work properly.
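The "intuitive approach is full of holes" point is easiest to see with SQL injection. A quick sketch (using Python’s sqlite3 here rather than PHP, purely for brevity; the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_naive(name):
    # The intuitive approach: paste user input into the SQL string.
    # A crafted input changes the meaning of the query itself.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver binds the value separately,
    # so the same input is treated as data, not as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(lookup_naive(payload))   # the injected OR clause dumps every row
print(lookup_safe(payload))    # no user named "x' OR '1'='1" -> empty
```

The safe version is not harder to write; the problem is that nothing about the naive version looks wrong, which is exactly the complaint above.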

When I work on a hard problem, I strip away the side issues and focus on the core fundamental problem that needs to be solved. With most software, it is possible to break systems down into core components that solve coherent problems well and then combine them into a bigger system. This is not the case with web programming. And this is what makes it inaccessible to “normal people.” The elites in software engineering are the sorts of people who extract the grand unifying theories behind problems, solve the esoteric problems, and provide the solutions as tools to other people. Then “normal people” use those tools to get work done. With the current state of the web, this is basically impossible.

Comment: Who are these idiot futurists? (Score 1) 564

MAYBE machine intelligence will surpass humans in some ways, but where the hell do we get this idea that they’ll decide we’re unstable and wipe us out? Sci Fi? Do we get it from anything RATIONAL?

We humans have our emotions from millions of years of evolution in hostile environments on earth. And really, emotions are just low-level intelligence adaptations for detecting and avoiding threats. They’re somewhat vestigial in humans, due to layers of more advanced intelligence on top of earlier developments. With intelligent computers, we’re bypassing all of that and just giving them basic reasoning capabilities over huge data sets. For AI, the evolutionary mutations (i.e. advances in AI algorithms [*]) and selective pressures (which AI algorithms we choose to deploy) are COMPLETELY DIFFERENT from what our ancestors faced.

Computers will not spontaneously develop either intelligence or the kind of moronic reasoning necessary to decide to wipe humans out. To get the latter, we’d need a massive conspiracy of megalomaniac genius experts in artificial intelligence who intentionally develop malware to infect military robots that go around shooting people. Oh, and we’d need the robots, too.

Some people forget that politics (or aspects of it) and paranoia move as fast as technology. Every time some scientific advance occurs, a bunch of ethics people (some sensible, some not so much) pounce on it and pick it to pieces. IBM Watson isn’t capable of the kind of decision making that would make humans obsolete, but there are plenty of people who are worried about it and ready to develop all manner of reasonable and asinine regulations.

Bottom line: Intentionally developing or accidentally evolving destructive AIs like this is highly implausible, due to lack of motivation and lack of evolutionary pressure, and the evolutionary pressures that do exist run counter to this kind of development as well.

[*] Implementations of AI programs may be done intentionally by humans, but advances in algorithms evolve as memes. Evolutionary steps may often seem intentional, but quite often they’re the result of arbitrary combinations of pre-existing ideas in people’s minds, where the cleverness exists mostly in figuring out that these ideas can go together and finding a way to combine them. Technology evolves in the same way that languages do.

Comment: Re:The good Samaritan always gets his ass kicked (Score 1) 160

by Theovon (#47377507) Attached to: Facebook Fallout, Facts and Frenzy

Yes and no. They were testing something that they HYPOTHESIZED could reduce the quality of the user experience. And IIRC, that hypothesis turned out to be wrong (to the extent that one can get that from the statistics).

If all user interface modifications that lead to an improved user experience were intuitive, then Facebook would have implemented them already. They are now at a point where they have to consider things that are NOT intuitive. The idea that filtering other people’s posts in a way that increases their negativity should actually lead to an improvement in user experience is not intuitive. Moreover, their explanation for the result (that people are turned off by their friends doing too well) is conjecture, albeit a reasonable one. Human psychology is complex, and for Facebook to continue to advance their mind job on people, they have to get really clever and do things that aren’t obvious.

Like I said, I feel it was a mistake not to get human-subjects approval before conducting this research. If this had been an analysis of pre-existing data, they wouldn’t have needed approval. If Facebook had done this unilaterally, they wouldn’t have needed approval. But in these specific circumstances, the law is somewhat ambiguous on this point.

You’ll notice that I referred to Facebook as a mind job. I really don’t like it. I have an account, but I seldom use it. However, that doesn’t mean I can’t try to be objective about this research experiment.

Comment: Re:The good Samaritan always gets his ass kicked (Score 1) 160

by Theovon (#47375691) Attached to: Facebook Fallout, Facts and Frenzy

The requirement for informed consent was ambiguous in this case. If I had been in their position, I would have erred on the side of caution, and the research faculty who consulted on this project should have been more resolute about it. If anything, it is those people who should have done the paperwork. I think their failure to get informed consent was a mistake, but I don’t believe it was any kind of major ethical violation. It does no harm to get informed consent, even if you don’t legally need it, and there are moral arguments for getting it in any case.

My main point is that this kind of “manipulation” has been going on for a long time and will continue to occur. Facebook intentionally manipulates users in all sorts of ways to determine what gets users to use their service and click ads. The only practical difference between this intentional manipulation and past intentional manipulation is that this time, they reported on it. Going forward, they will continue to not get informed consent (because they don’t need it), and they will also continue to manipulate. The travesty is that they will simply stop reporting their findings; that is the ONLY thing that will change, and the rest of the world will be less informed because of it.

Comment: The good Samaritan always gets his ass kicked (Score 4, Insightful) 160

by Theovon (#47375463) Attached to: Facebook Fallout, Facts and Frenzy

As has been pointed out many times, Facebook was doing their usual sort of product testing. They actively optimize the user experience to keep people using their product (and, more importantly, clicking ads). The only difference between this time and all the other times was that they published their results. This was a good thing, because it introduced new and interesting scientific knowledge.

Because of this debacle, Facebook (and just about every other company) will never again make the mistake of sharing new knowledge with the scientific community. This is truly a dark day for science.

Ferengi rule of acquisition #285: No good deed ever goes unpunished.

Comment: Re:CMOS scaling limited by process variation (Score 1) 142

by Theovon (#47358713) Attached to: Will 7nm and 5nm CPU Process Tech Really Happen?

I don’t know the principles behind how doping concentrations are chosen, but I’m sure they’re optimized for speed. Also, you can compensate for Vth variation using body bias, but it’s basically impossible to do this per-transistor. You can do it for large blocks of transistors, which allows you to compensate a bit for systematic variation (due mostly to optical aberrations in lithography), but there’s nothing you can do about random variation. There’s also effective-length variation, which I don’t think you can compensate for using body bias.

Comment: CMOS scaling limited by process variation (Score 4, Interesting) 142

by Theovon (#47283195) Attached to: Will 7nm and 5nm CPU Process Tech Really Happen?

There are a number of factors that affect the value of technology scaling. One major one is the increase in power density due to the end of supply and threshold voltage scaling. But one factor that some people miss is process variation (random dopant fluctuation, gate length and wire width variability, etc.).

Using some data from ITRS and some of my own extrapolations from historical data, I tried to work out when process variation alone would make further scaling ineffective. Basically, when you scale down, you get a speed and power advantage (per gate), but process variation makes circuit delay less predictable, so we have to add a guard band. At what point does the decrease in average delay become equal to the increase in guard band?
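The crossover argument can be sketched with a toy model. All the numbers below are invented to illustrate the shape of the effect; they are not the ITRS data or the extrapolations referred to above. The idea: mean gate delay shrinks each node, but relative variation (sigma/mu) grows, so the delay a designer must actually budget (mean plus a k-sigma guard band) eventually stops improving.

```python
def guarded_delay(mu, cv, k=3.0):
    """Worst-case delay after adding a k-sigma guard band.
    mu = mean delay, cv = sigma/mu (coefficient of variation)."""
    return mu * (1 + k * cv)

# (node, mean delay in ps, sigma/mu) -- all numbers hypothetical
nodes = [("22nm", 10.0, 0.08),
         ("14nm",  8.0, 0.12),
         ("10nm",  6.5, 0.18),
         ("7nm",   5.4, 0.27),
         ("5nm",   4.6, 0.40)]

for name, mu, cv in nodes:
    print(f"{name}: mean {mu:.1f} ps, guard-banded {guarded_delay(mu, cv):.1f} ps")
```

With these made-up inputs, the mean delay falls at every node while the guard-banded delay bottoms out and then turns back up: the point at which the variation penalty cancels the scaling benefit.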

It turns out to be at exactly 5nm. The “disappointing” aspect of this (for me) is that 5nm was already believed to be the end of CMOS scaling before I did the calculation. :)

Incidentally, if you multiply out the guard bands already applied for process variation, supply voltage variation, aging, and temperature variation, we find that for an Ivy Bridge processor, about 70% of the energy going in is “wasted” on guard bands. In other words, if we could eliminate those safety margins, the processor would use 1/3.5 as much energy for the same performance or run 2.5 times faster in the same power envelope. Of course, we can’t eliminate all of them, but some factors, like temperature, change so slowly that you can shrink the safety margin by making it dynamic.
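The "multiply out the guard bands" arithmetic looks like this. The four per-factor margins below are stand-ins I picked so their product lands near the ~3.5x figure quoted above; the actual Ivy Bridge margins are not given in the post.

```python
from math import prod

# Hypothetical per-factor energy margins (derating multipliers); the real
# per-factor values for Ivy Bridge are not stated above, so these are
# illustrative stand-ins chosen to multiply out to roughly 3.5x.
margins = {"process": 1.50, "voltage": 1.40, "aging": 1.30, "temperature": 1.28}

compound = prod(margins.values())   # multiplied-out guard band (~3.5x)
wasted = 1 - 1 / compound           # fraction of input energy spent on margin
print(f"compound margin: {compound:.2f}x, energy wasted on guard bands: {wasted:.0%}")
```

Because the margins compound multiplicatively, four individually modest factors produce a large total, which is why dynamically shrinking even one slow-changing margin (like temperature) pays off.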

Comment: Re:Is it me or... (Score 1) 85

by Theovon (#47270655) Attached to: The Game Theory of Life

The simulated universe hypothesis is based on the seeming odd coincidence that our universe’s operation looks identical to information theory.

The problem with that hypothesis is that people seem to forget that our concept of information theory is a function of the universe it was developed in. Thus, it’s no coincidence, and the congruence of physics and information theory is not evidence of simulation.

Comment: Diversity of applicants? (Score 1) 435

by Theovon (#47263193) Attached to: Yahoo's Diversity Record Is Almost As Bad As Google's

What I want to know is: what kind of applicant pool do these companies have? If their hiring diversity matches their applicant pool, there’s not all that much they can do except maybe try harder to recruit in communities with higher proportions of minorities. If the minority applicants they get aren’t as well qualified (objectively), we shouldn’t encourage them to hire less qualified applicants. Anything else would be reverse discrimination, which would also be wrong.

Maintaining higher diversity avoids a monoculture and increases the diversity of thought, which is good for problem solving. But you can’t squeeze blood from a stone. (Well, except for blood stones.)

Comment: Re:Compartmentalization and ethics (Score 1) 220

by Theovon (#47202847) Attached to: NSF Researcher Suspended For Mining Bitcoin

They can be compared in that there are ethical considerations in both cases. As I said, abusing the supercomputer is a much more extreme case. In many ways, my examples are victimless crimes, while the supercomputer case had a far more tangible impact. On a relative moral scale, the supercomputing case was much more severe and would therefore merit a more severe penalty. My whole point, I guess, is that even victimless crimes are cases where an ethical person should think twice before taking action.

I do feel compelled to point out that “victimless crime” is a loaded term that is abused by some people who want to poke their noses where they don’t belong. Some people would, for instance, say that growing your own weed and smoking it is a “victimless crime.” I’m not sure what the current laws are, but since this doesn’t involve interstate commerce, it’s arguably not a crime at all, and it might be called one in the first place only because someone objects in general to smoking pot. Even if it were technically a crime, I think the ethics in this case depend on the broader impact. If smoking pot improves your overall well-being and doesn’t negatively impact your functioning in society, then it’s probably fine. If you’re neglecting your kids because of it, then it’s wrong. The ethical failure, however, is not in smoking pot but in failing to moderate the impact of your choices; it’s just as bad to neglect your kids while playing video games online.

In the case of abusing equipment that was bought with taxpayer money, even if it’s “victimless,” it’s still unethical because you’re acting beyond your rights with respect to an asset that you have been trusted with. In other words, the ethical failure is not in the use of the equipment per se, but rather in a breach of trust with respect to how you’re expected to use it. It’s one thing to borrow the company truck to go grab lunch. It’s entirely another to borrow company A’s truck to go do consulting work for company B, even if you’re not on company A’s clock at the time and you use your own money to fill the gas tank.

Comment: Compartmentalization and ethics (Score 1) 220

by Theovon (#47202225) Attached to: NSF Researcher Suspended For Mining Bitcoin

The abuse of the supercomputer is an extreme case. But there are other less clear-cut areas. For instance:

- What if I bring my own computer to the university and use their electricity to generate bitcoins?
- What if I bring university-owned equipment (that I have control over) home and use it to mine bitcoins on my electricity?

In either case, something that doesn’t really belong to me (even if I’m in charge of it and have the right to relocate) is being used for profit in a way that is (a) most likely against policy, and (b) not ethical in the first place.

The latter category is the really tempting one. Nobody would catch me, because all the network traffic and electricity usage is at my own home. Any impact on the longevity of the equipment is moot because it would probably go obsolete long before it suffered hardware failure. And of course, I could claim that I’m taking it home for official purposes (nobody would question me anyhow). This is one of those cases where you have to let your sense of right and wrong take precedence over the fact that you’re clever enough to not get caught.
