Comment Nerd-rant deconstruction (Score 4, Insightful) 525

I know it's a sign of weakness to do a line-by-line rebuttal of flamebait, but TFA is seriously pissing me off.

Policy makers ... knew that music sales in the United States are less than half of what they were in 1999, when the file-sharing site Napster emerged, and that direct employment in the industry had fallen by more than half since then, to less than 10,000.

These statements are not backed up. Given the industry's history of exaggerating their claims, I put the onus on them to prove that these numbers are in any way correct.

Consider, for example, the claim that SOPA and PIPA were “censorship,” a loaded and inflammatory term designed to evoke images of crackdowns on pro-democracy Web sites by China or Iran.

Yet the author's use of "theft" and "piracy" are totally neutral, without any intent to evoke particular emotions in the readership?

When the police close down a store fencing stolen goods, it isn’t censorship, but when those stolen goods are fenced online, it is?

This is being purposefully obtuse. The claims of 'censorship' were about collateral damage: that the laws would have a chilling effect and would be open to abuse. No one was directly equating "shutting down online counterfeiting sites" with censorship. (Although, of course, the difference between shutting down a physical store and an online presence is indeed that the Internet is all about communication/data-transfer, and curtailing communication is essentially censorship.)

They also argued misleadingly that the bills would have required Web sites to “monitor” what their users upload, conveniently ignoring provisions like the “No Duty to Monitor” section.

This is an interesting claim. But if the author is sure that the "No Duty to Monitor" section protects conveyors of content, then why not spell that argument out in detail? Why not quote from the bill, and explain how this protection works? That is the very crux of the disagreement, it would seem, yet the author just mentions it in passing.

Apparently, Wikipedia and Google don’t recognize the ethical boundary between the neutral reporting of information and the presentation of editorial opinion as fact.

This is perhaps the only valid point in the entire piece. It is true that Wikipedia and Google (in very different ways) strive for some measure of neutral transmission of information. I can see how one could argue that using their position as trusted sources of information to spread their own viewpoint is an abuse. However:
1. This is begging the question, by assuming that what Wikipedia and Google were reporting was incorrect. But that is precisely what the debate is about: is it true that SOPA/PIPA would lead to collateral censorship? If the claim is true (and as far as I can tell, it is), then Wikipedia spreading that information was just another manifestation of them spreading truthful statements.
2. These entities do have a right to let their opinion be known.
3. The opinion piece provides no reason why these companies would be misinforming the populace. What is it they hope to get out of it? Their stated reason is simple: that they wanted to stop the legislation because they couldn't continue operating under the legislation. The author provides no evidence, not even spurious reasoning in fact, for any other motivation. So, one could accuse them of being mistaken, but to accuse them of pushing an ideology is wrongheaded.

“old media” draws a line between “news” and “editorial.”

This is laughable. Mainstream media has a well-documented history of injecting bias into their reporting (everything from their selection of what to cover, to how events are described, to thinly-veiled editorials/opinions masquerading as 'balanced reporting').

The violation of neutrality is a patent hypocrisy: these companies have long argued that Internet service providers (telecommunications and cable companies) had to be regulated under the doctrine of “net neutrality” ...

This is a red herring of the highest order. The debate about net neutrality is about a very specific kind of neutrality.

And how many of those e-mails were from the same people who attacked the Web sites of the Department of Justice, the Motion Picture Association of America, my organization and others as retribution for the seizure of Megaupload, an international digital piracy operation? Indeed, it’s hackers like the group Anonymous that engage in real censorship when they stifle the speech of those with whom they disagree.

I see. Equating the massive outpouring of opinion with a minority of people who engage in illegal hacking (I'm surprised he didn't pull out the "terrorist" card). He can't fathom that the public actually believes what they are saying. He is certain that they are either misled or criminals (possibly both).

Perhaps this is naïve, but I’d like to believe that the companies that opposed SOPA and PIPA will now feel some responsibility to help come up with constructive alternatives. ... The diversionary bill that they drafted, the OPEN Act, would do little to stop the illegal behavior and would not establish a workable framework, standards or remedies.

So in one sentence he bemoans that the opposition is not coming up with any alternatives, and two sentences later mentions offhand that the opposition has, indeed, suggested an alternative. But he doesn't like that alternative. Moreover he calls that alternative 'diversionary'. As we can see, he is certainly above using "loaded and inflammatory terms" to make his point.

We all share the goal of a safe and legal Internet. We need reason, not rhetoric, in discussing how to achieve it.

The irony of course is that the original legislation was being pushed through without any public discourse. They wanted it to happen without the input of a myriad of stakeholders (like, say, the public). Only now that the entire process has been laid bare do they call for reasoned discussion. Again, his entire essay drips with incredulity that the public has the audacity to disagree with his plan. He is annoyed not so much by what Google's and Wikipedia's opinions are, as by the fact that they brought this debate to the people... and that in an open, reasoned debate, his extremist plans cannot survive for long.

Comment Re:What's the point of journals? (Score 3, Interesting) 206

Transparency is generally a good thing, and I agree that many aspects of the publishing process are needlessly opaque. This should be fixed. But anonymous peer review has certain advantages. It provides an opportunity for reviewers to be completely honest. Think about a junior scientist reviewing a paper by a more well-established peer: they may fear that a critical review will seriously hurt their career. Think about scientists not wanting to be critical in a review of a friend's paper, or conversely people punishing papers because 'they rejected my last paper!' And so on. The journal editor serves the role of maintaining the anonymous peer review system. (Note that in anonymous peer review, the reviewer is still free to disclose their identity by signing their review; and indeed many scientists do this.)

Of course there could be ways to do anonymous peer-review in an open forum system (e.g. using trusted editor-like intermediaries, or using verifiable keys that can establish trust without disclosing who posted the review). It could be done; in fact nothing prevents all of this from happening right now (even now, authors could individually post their rejected articles, including all peer-review and editor comments, to their institutional websites; this at least partially happens through arXiv).
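As a sketch of the "verifiable keys" idea: a reviewer could publish a hash commitment alongside their anonymous review, and later prove authorship by revealing the secret behind it. This is just an illustrative commitment scheme, not a full protocol (it proves later-claimed authorship, but says nothing about reviewer credentials), and the names are made up.

```python
import hashlib
import os

def commit(identity: str) -> tuple[bytes, bytes]:
    """Reviewer commits to their identity without revealing it.
    Returns (public commitment, secret nonce kept by the reviewer)."""
    nonce = os.urandom(16)
    digest = hashlib.sha256(nonce + identity.encode()).digest()
    return digest, nonce

def reveal(commitment: bytes, nonce: bytes, identity: str) -> bool:
    """Later, the reviewer can prove they wrote the review by
    revealing the nonce; anyone can recompute and check the hash."""
    return hashlib.sha256(nonce + identity.encode()).digest() == commitment

c, n = commit("Dr. Jane Doe")   # published with the anonymous review
assert reveal(c, n, "Dr. Jane Doe")      # genuine claim verifies
assert not reveal(c, n, "Someone Else")  # impostor claim fails
```

The random nonce matters: without it, anyone could brute-force the hash over a list of likely reviewer names.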

My point about efficiency was that for a given final state X, we can either tweak our current journal model until it reaches X, or we can start from scratch building a new initially inefficient system A, and then tweak that until we reach X. Both will have serious growing pains, but it seems to me that it will be easier (in particular, easier to get scientists on board with the changes) to smoothly transition from the current system to the final desired state of X. Doing it smoothly means no downtime; each adjustment can be tested and the community can decide whether they like the change. So, again, I agree that there are many things about the journal system that could be fixed, and which modern Internet technology can help fix (open access, transparency, better logging of opinions/comments/etc., allowing any scientist to comment on any article, creating a space for public debates/discussions, etc.). I just think that the most kinetically favorable path to that new state is a series of changes from the current journal system (for all its faults, the community is doing a lot of great science these days!).

Comment Re:What's the point of journals? (Score 3, Informative) 206

The problem with a free forum is signal-to-noise. It would have to have some kind of reputation system, such as scientists rating/flagging each other's contributions. That way, you could add some respected scientists to your 'trusted' list, and things that they trust would be highlighted/promoted to you. Essentially a web-of-trust model. This has obvious downsides, such as scalability problems and the inherent formation of cliques.
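A toy version of the web-of-trust idea (made-up names, arbitrary decay factor) might look like this: trust flows outward from you through the people you've rated, discounted at each hop.

```python
def trust_scores(web, me, depth=2, decay=0.5):
    """Propagate trust outward from 'me' through who-trusts-whom links.
    Each hop multiplies trust by 'decay'; keep the best score per person."""
    scores = {me: 1.0}
    frontier = [me]
    for _ in range(depth):
        nxt = []
        for person in frontier:
            for peer in web.get(person, []):
                s = scores[person] * decay
                if s > scores.get(peer, 0.0):
                    scores[peer] = s
                    nxt.append(peer)
        frontier = nxt
    return scores

# Alice trusts Bob; Bob trusts Carol; Alice never rated Carol directly.
web = {"alice": ["bob"], "bob": ["carol"]}
scores = trust_scores(web, "alice")
# Carol still gets a score (0.25), discounted for arriving via Bob.
```

Even this toy shows the clique problem: people outside your trust graph score zero no matter how good their work is.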

The thing is that journals are actually a decent solution to these issues. They curate content on your behalf, and you decide which journals are more reputable than others. By doing some of the leg-work for you, they handle scalability and make the format relatively open to all comers. They also have the advantage of already existing: scientists already know which journals are better than others, understand the process of submitting to journals, and so on...

My point is that while you could entirely ditch the journals, and build a whole new system... this would be inefficient. It would seem simpler to take the current journal system, and just fix the things that are wrong with it (in particular, the exorbitant costs and the lack of open access). On the one hand, you may say it's hopelessly idealistic of me to expect for-profit journals to willingly move towards a more open format. On the other hand, there are already highly successful open-access journal ventures (e.g. PLoS), which are indeed pushing the journal system towards open access. So there is hope that we can reform the journal system.

Comment Re:Innovation (Score 5, Interesting) 449

Agreed. It's fashionable to decry any new UI ideas as stupid. And indeed many UI redesigns are a step backwards, or purely aesthetic, or confusing, ... I'm not a fan of Unity, for instance. But we have to be at least somewhat open to new UI ideas, or computer interaction will never move forward.

This particular idea seems really good to me. In fact it's something I've been wanting for a long time. There have been small pushes in this direction (e.g. the Ubiquity add-on for Firefox would let you type commands (like "map XXX" or "email page to XXX") and get immediately useful results), but for it to really work, from a user perspective, it has to be available in every application so that it's worth the cost to learn the new style.

Being able to search the menu structure is really powerful, especially for applications with loads of commands (photo editors, word processors, etc.). I've lost count of the amount of time I've wasted searching through menus for a command that I use infrequently. I know it exists, I've used it before... but does it count as a "Filter" or an "Adjustment" or an "Edit"? Why can't I just search for it? Moreover, I shouldn't have to train myself to remember where it was put. Once you get used to typing commands, it can be extremely fast to do so, becoming almost as fast as a keyboard shortcut. (Obviously this will be more the case in applications where your hands are already on the keyboard, like word processors; it could be slow in applications like photo-editing where your hand is usually on the mouse...)

The ability to rapidly invoke commands via the keyboard is something that I would think most slashdotters would love: it adds back in some of the power of the commandline. It also inherently streamlines usage across applications (you should be able to just type "Save" or "Preferences" in any application and get the expected behavior, regardless of where they put the menu item. If they're smart, they'll add synonyms, so that "Options" and "Preferences" map to each other...)
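For example, a minimal command-palette matcher with a synonym table (all the menu entries and synonyms here are hypothetical) could look like:

```python
# Map alternative terms onto a canonical keyword.
SYNONYMS = {"options": "preferences", "settings": "preferences"}

# Each menu path lists the keywords it should match on.
MENU = {
    "File > Save": ["save"],
    "Edit > Preferences": ["preferences"],
    "Image > Adjustments > Levels": ["levels", "adjust levels"],
}

def find_commands(query, menu=MENU, synonyms=SYNONYMS):
    """Return menu entries whose keywords match the query,
    after mapping synonyms onto their canonical term."""
    q = query.strip().lower()
    q = synonyms.get(q, q)
    hits = []
    for path, keywords in menu.items():
        if any(q in kw or kw in q for kw in keywords):
            hits.append(path)
    return hits

find_commands("Options")   # -> ["Edit > Preferences"]
find_commands("levels")    # -> ["Image > Adjustments > Levels"]
```

A real implementation would want fuzzy matching and ranking, but the synonym-normalization step is the part that makes "Options" and "Preferences" interchangeable across applications.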

While I am excited about all this, they do need to leave, in my opinion, the usual menu bar accessible and visible. The reason is simple: during the initial learning phase of an application, you don't even know what's possible. You need some way to explore the available commands, see what the app can do, and experiment. Only once you're somewhat familiar with the application does it make sense to quickly invoke commands with the keyboard.

Comment Re:Email is private? (Score 3, Insightful) 533

That's kinda silly. If I have a phone conversation in an empty room of a friend's house, then according to you it's not a private communication because I'm having it in a room controlled by someone else, and they could have bugged the room? Or if I write a personal letter in my office at work, it's not private because my employer may have installed a secret monitoring camera?

The fact is that there are social conventions afoot: for example, that my friends don't bug their houses and that my employer hasn't installed secret cameras (some of these conventions are in fact backed up by laws). As such, even though someone ~could~ intercept my communication, it is presumptively private, and anyone who circumvented that would be accused of violating my privacy.

Similarly with networks. It's certainly possible for my friend to keylog their computer, or make copies of all traffic that passes through their router. But most sensible people would assume that this is not happening, and that doing so would be an invasion of the privacy of others.

So, email is private. That doesn't mean it's un-interceptable (neither is postal mail: it's trivial to grab someone else's mail and read it). But those who intercept it are violating privacy. (Of course if privacy is important to you, then you should take extra steps (e.g. encryption). But communications that you target towards a specific person are presumptively private.)

Comment Children acting childish... (Score 5, Insightful) 533

Giving out your password as a demonstration of trust is just silly. I trust my boss with work-related things, but that doesn't mean I give him the passwords to all the servers at work. Why? He doesn't need them. I trust my mom, but I don't give her my bank PIN. Why? She doesn't need it. I trust my girlfriend but I don't give her my gmail password. Why? Because she has no use for it. The difference between strangers and people I trust is that I ~would~ give friends/family secret credentials, if there was a valid need (e.g. I was sick and needed my girlfriend to perform a financial transaction for me). But giving out the details just for fun is illogical, and insecure.

Moreover, it's more a manifestation of a lack of trust. I don't care that I don't know my girlfriend's Facebook password... because I trust her. The only boyfriends/girlfriends who want each other's passwords are those who don't trust each other: they want to check up on what the other one is posting/saying. They don't trust them enough to let them have privacy or private conversations. I've seen this happen (my sister once had a jealous boyfriend who thought she was cheating on him and thus demanded access to her email and Facebook passwords so that he could check for himself... the relationship did not last).

Overall, this whole "if you loved me you'd give me your password" is infantile. The appropriate response is: "If you respected me you wouldn't ask for it."

Comment Re:This again? (Score 1) 589

In the abstract, you're right that we can't know a priori what the 'fair' distribution should look like. If everyone had equal opportunities to pursue their interests, free from any bias or pressure (either pressure to join or pressure to stay out), then the distribution within a sub-field could match the population at large (if interest in the sub-field is uncorrelated to the distribution parameter in question (ethnicity, gender, etc.)), or the distribution within a sub-field could be different from the population at large (if interest in the sub-field is somehow correlated to some parameter). An obvious example is age: the distribution of ages within a particular job does not match society at large. Nor would we expect it to: the abilities and interests of people vary strongly with age.

Now that we've gotten that abstractness out of the way, let's consider the real world. In the general case, answering these kinds of questions can be quite difficult, as bias can be stubbornly difficult to identify and quantify. On top of that, people's upbringing and sociological context will of course affect their opinions and interests. Is it enough for us to just provide equal opportunities? Or should it be a social goal to specifically reach out to people who, for whatever reason, don't realize that they would enjoy and be good at work in a particular field?

But we can get more concrete still. Is the distribution of women in programming reflective of their true interest, or is it somehow skewed by sociological biases/pressures/etc.? Well, here there is actually plenty of evidence. Firstly, the gender differences in capabilities are either small or non-existent. The difference in, say, math skills between men and women is very small: far smaller than the spread within either the male or the female population, for instance. (This has been borne out in many studies.) Also, the gender differences are well known to vary over time (getting smaller as social equality improves), and to vary with location and social context, suggesting that what differences are seen are due to environment and not to intrinsic differences in capabilities. Taken together, all of this suggests that if we're just talking about capabilities, the distribution should be much closer to 50:50 than it is today.

Additionally, there is no lack of evidence for sexism still existing, and in particular still existing in high-education fields like math, science, and programming. So while it is difficult to say what the 'fair' distribution of men:women should be in the field of programming, we can be fairly sure that it is currently strongly skewed by the considerable sexism known to exist.

My point is that while it can be difficult to really know what the fair distribution would look like (where everyone is making free, informed, unpressured choices)... we are demonstrably not yet at that point. There is still plenty of overt bias, sexism, and hostility. So until we have done a much better job of leveling the playing field, we shouldn't get sidetracked by esoteric 'ideal distribution' questions.

Comment Re:Do no evil indeed (Score 5, Insightful) 383

You're absolutely right. If the allegations are true, then Google is at fault and should be taken to task for this.

However, when things like this happen, it's usually worthwhile to figure out whether the bad behavior was isolated to a single person, a single department, a single branch, or whether it's a common part of the company's internal culture, or even a company-wide policy. The point being that if we can reliably determine that it was a small subset of the company behaving badly, and the company removes the offending parties, then you can reasonably keep interacting with the company (albeit with more vigilance than you were before). If, on the other hand, it's clear that this was part of a company-wide pattern, then you should reasonably stop trusting the company as a whole.

To be clear: it's not a matter of absolving the parent company from responsibility (they are indeed responsible for everything their subsidiaries and employees do). It's about coming up with valid predictions about how likely this company is to be a repeat offender.

Comment Re:Bad article (Score 4, Informative) 135

Unrelatedly: have they/will they publish a paper on this? I can't find anything mentioning a paper in the press releases.

The actual paper was published today in Science:
Sebastian Loth, Susanne Baumann, Christopher P. Lutz, D. M. Eigler, and Andreas J. Heinrich, "Bistability in Atomic-Scale Antiferromagnets," Science, 13 January 2012: Vol. 335, no. 6065, pp. 196-199. DOI: 10.1126/science.1214131. (Affiliations: IBM Almaden Research Division; Max Planck Institute; University of Basel.)

The abstract is:

Control of magnetism on the atomic scale is becoming essential as data storage devices are miniaturized. We show that antiferromagnetic nanostructures, composed of just a few Fe atoms on a surface, exhibit two magnetic states, the Néel states, that are stable for hours at low temperature. For the smallest structures, we observed transitions between Néel states due to quantum tunneling of magnetization. We sensed the magnetic states of the designed structures using spin-polarized tunneling and switched between them electrically with nanosecond speed. Tailoring the properties of neighboring antiferromagnetic nanostructures enables a low-temperature demonstration of dense nonvolatile storage of information.

Some big names are on this paper (Don Eigler is a pioneer of STM; responsible for the famous "IBM written with xenon atoms" proof-of-concept, and along with Lutz worked on the also-famous "quantum corrals").

Comment Re:Vibration will be the biggest challenge (Score 3, Insightful) 135

You're right that for STM and AFM instruments, vibration is a huge issue. But when using those instruments, you're trying to image nano-sized objects, or even individual atoms. So of course vibrations bigger than an atom's width will ruin your image. You can compensate for this (to a point) by making the device more rigid, and also by damping out environmental noise. But there's a limit to what you can do (e.g. you can't make the cantilever your tip is attached to very stiff, or you would ruin your sensitivity).

In an atomic magnetic memory, though, you wouldn't really be imaging individual atoms. You'd be scanning the tip back-and-forth and trying to sense (or set) the local magnetic field. Thus you wouldn't need to use a soft cantilever to hold the tip. A very stiff/rigid one would be fine, as long as it is correctly positioned in relation to the encoding atoms (close enough for sensing, etc.). The magnetic response in general will be stronger than the usual imaging modes for STM.

My point is just that using a STM-like device for storing/retrieving data eliminates many of the design constraints that a full-blown STM needs (because it's trying to do precise topography and density-of-states imaging...). You can play many engineering tricks that they can't afford to do in a real STM.

Having said that, many challenges would remain. External vibrations could still make the device unstable (or require it to sample for longer periods to average out signals, thus lowering data throughput). Temperature stability is probably going to be a major concern (thermal expansion will change the nano-sized gap between the tip and bits, which will need to be compensated for; thermal noise could overwhelm the signal entirely; thermal gradients could make alignment of the tips and compensation for temperature drift even harder; etc.).
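To put a rough number on the thermal expansion concern (illustrative values only, assuming an aluminium-like support structure and simple linear expansion):

```python
# Linear thermal expansion: dL = alpha * L * dT
alpha = 23e-6   # 1/K, roughly aluminium (illustrative value)
L = 0.01        # 1 cm of support structure, in metres
dT = 0.1        # a mere tenth of a degree of temperature drift

dL = alpha * L * dT
print(dL)  # about 2.3e-8 m, i.e. ~23 nm of drift
```

Twenty-odd nanometres is enormous next to a tip-sample gap of around a nanometre, which is why active compensation (or a very well-matched low-expansion design) would be mandatory.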

Then again, you only have to look at the absurd sophistication of modern HDDs or CPUs to be convinced that we can handle these kinds of challenging engineering problems (if there is enough economic incentive).

Comment Re:I think 12 atoms should be enough for everyone (Score 5, Informative) 135

Is anyone aware of how "big" they are

An actual STM instrument is pretty big. About the size of, say, a mini-fridge. But the majority of that is the computer to drive the system, the readout electronics, and the enclosure (to damp out vibrations, establish vacuum, etc.). The actual readout tip is pretty small: a nano-sized tip attached to a ~100 micron 'diving board' assembly.

A related problem with STM is that it's a serial process: you have a small tip that you're scanning over a surface. This makes readout slow. However, in a separate project, IBM (and others) have been working on how to solve that: the idea is to use a huge array of tips that scan the surface in parallel (IBM calls it Millipede memory). This makes access faster since you can basically stripe the data and read/write in parallel, and it makes random seeks faster since you don't have to move the tip array as far to get to the data you want. It increases complexity, of course, but modern nano-lithography is certainly up to the task of creating arrays of hundreds of thousands of micron-sized tips with associated electronics.
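The striping idea is just round-robin interleaving across tips; a toy sketch:

```python
def stripe(data: bytes, n_tips: int):
    """Split a byte stream round-robin across n parallel tips."""
    return [data[i::n_tips] for i in range(n_tips)]

def unstripe(stripes):
    """Interleave the per-tip streams back into the original order."""
    out = bytearray()
    for chunk in zip(*stripes):  # assumes equal-length stripes
        out.extend(chunk)
    return bytes(out)

data = b"ABCDEFGH"
parts = stripe(data, 4)  # each tip handles 2 bytes instead of 8
assert unstripe(parts) == data
```

With N tips, each tip scans 1/N of the data, so sequential throughput scales roughly with N and the worst-case seek distance shrinks accordingly (ignoring control overhead, which in a real device would not be negligible).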

Using tip arrays would make the read/write parts more compact (as compared to having separate parallel STMs, I mean). The enclosure and driving electronics could certainly be miniaturized if there were economic incentive to do so. There's no physical barrier preventing these kinds of machines from being substantially micronized. As others have pointed out, the first magnetic disk read/write systems were rather bulky, and now hard drives can fit in your pocket. It's possible the same thing could happen here. Having said that, current data storage techniques have a huge head-start, so for something like this to catch up to the point where consumers will want to buy it may take some time.

Comment Re:Google versus Apple (Score 5, Insightful) 360

Agreed.

To amplify this 'uncanny-valley' notion. The problem with the anthropomorphizing ('attitude') approach is that it lulls the user into thinking they are dealing with a very sophisticated (sentient) system. This fiction quickly disappears once the user runs requests that the AI quite obviously doesn't understand. At that point, the quirky personality becomes annoying (think Clippy), and the fact that it pretends to be as smart as a human, without actually being as smart as a human, makes the interface seem broken and comically insufficient.

The opposite approach, also seen in robotics and many other areas of AI (e.g. search), is to not pretend that the system is like a person. Instead, make it obvious that it is a machine, with a set input/output behavior. Users can then quickly learn how to best use this machine to accomplish tasks. If the shortcomings of the system are evident, users will not be surprised by them and will instead build these into their mental model of how the system works.

As a case study, consider the similar criticisms that have been made about Wolfram-Alpha (e.g. here): essentially, W|A is a highly sophisticated set of computation and relation engines. However it's all wrapped up inside an overly simplistic UI (a single text-entry box, without any obvious way to refine what you mean). This leads to people getting all kinds of unintended results, despite the fact that the system actually can perform the computation/analysis/lookup the user wants. It's just that there is no obvious way to tell it what lookup you meant. The overly-simplified UI implies to the user that the system will just 'figure out what you mean', but the fact is it fails to do that very frequently; the user becomes frustrated because they then have to mentally reverse-engineer W|A's parsing logic, trying to build a query that returns the kind of results they want.

In short, it's better to design a UI that is an honest reflection of the sophistication/power of the underlying technology. To do otherwise creates a bad user experience, because user expectations are not met by the available functionality.

Comment Re:It'll still be spam to me (Score 1) 219

To play devil's advocate here: What if the personalization did include elements such as whether or not you're in the market for something? What if the personalization were tuned to each person's 'spam tolerance' so that the number, type, and content of the emails were below your threshold for annoyance?

Imagine your phone breaks, and then you sit down at your computer and already there is an email along the lines of: "These are the current best smartphones that match your desires and budget. Here are links to reviews for these phones (at sites you trust). Here are links to buy any of these, if you are interested." Or, a month before Christmas, you receive an email like "Your sister would probably like the following items for Christmas. If you buy them soon, you can get better rates and they'll arrive in time for the holidays." Or you get an email like "You were interested in buying a bigger TV a month ago, but they were all too expensive. However a recent sale has the TV you like at the price you were willing to pay. Click here to buy it on Amazon (which currently has the lowest price for this item)." And so on...

In other words, imagine if the advert emails were actually useful to you. So useful, in fact, that they offset the annoyance of getting an 'out of the blue' email. If advertising emails were really that tailored, people would probably read them, and click on the links. Heck, people might even actively sign up for (even pay for!) such tailored shopping advice.

Having said all that, I agree that this kind of advertising would be fundamentally creepy and unsettling. It would very pointedly highlight just how much information companies have on us. (How did they know my phone just broke? How did they know I wanted to buy a new TV?) Creepy as it is, however, the cynic in me says that the majority of people would eventually get used to it. The main reason it won't work, actually, is because companies don't have the self-control necessary to pull it off. They will use any opportunity to mislead, lie, and annoy, as long as it gives them (or they think it gives them) a slight edge. With thousands of companies trying to out-yell each other to catch our attention, it inevitably becomes annoying. Which means that no matter how good those emails might be, we will still be aggressively spam-blocking them, and won't trust any of them.

Comment Re:some perspective (Score 3, Insightful) 312

It also depends what you mean by "belongings", though. Some people will interpret it to be all the "stuff" they own (clothes, computers, furniture, etc.). But my car is also a "belonging" and if I include it in the calculation, it accounts for a large fraction by volume. (Of course, if I'm allowed to store other stuff inside the car for the purposes of computing total volume, that changes things... That can be done without breaking anything, but somehow seems like it's violating the premise of the question, which is asking how much space all of your stuff normally takes up.)

Also, for those people who own houses, or plots of land, that would substantially increase the size of their belongings.

Point being that when interpreting the spread of answers, you have to account for the variation in how people interpret the question. (Note that I'm not complaining about "lack of options" or "lack of precision" in a Slashdot poll. Actually, one of the things I like about Slashdot polls is the analysis that goes on in the comments about how fundamentally unclear the question is. Slashdotters are probably more pedantic and detail-oriented than most people, but it's still a useful exercise... reminding us to be wary of the results of surveys, for instance, since how the question is worded, and interpreted by respondents, can massively affect the distribution of answers, and thus the analysis of the data.)

Comment Re:Confusing positions (Score 3, Informative) 477

Well, there is a diversity of opinion on Slashdot, so you're inherently building a strawman here.

Nevertheless, it's perfectly consistent to be pro-net-neutrality and anti-SOPA. The underlying principle here is to maintain equal access to communication technology, in particular to not allow consolidated power bases (in particular, corporations) to control the flow of information. The purpose of net neutrality is to force companies to not discriminate between information seekers and providers; this maximizes the amount of information everyone can easily access. The purpose of striking down SOPA is to prevent companies from having yet more legal power to issue takedowns, censor material, and discriminate between information seekers and providers; preventing SOPA from being passed also maximizes the amount of information everyone can easily access.

Your strawman was implicitly painting this as a debate about whether regulation is good or bad. But that's incorrect. The question is not whether we should have laws. The question is what laws.
