Android

Submission + - It's time to start paying for Android updates (extremetech.com)

MrSeb writes: "As the days and weeks continue to flow by like a lazy river, Android 4.0 Ice Cream Sandwich (ICS) is still stuck someplace upstream from the vast majority of users. The newest version of Google’s platform was first released back in November of 2011, and there are still only a handful of devices outside the flagship Galaxy Nexus that run it. Unlike some past updates, this one is a real departure for Android. The user interface has been totally revamped, the stock apps are better than ever, and system-level hardware acceleration is finally available. It’s no secret that the update system for Android is a mess of monumental proportions. Not even Google’s efforts at I/O 2011 produced any concrete solutions. Many users waited the better part of a year for Gingerbread updates on their devices, and still others got no Gingerbread at all. With ICS being as important as it is, it’s time to talk about a radical step to make updates work — it’s time to pay for them."

Comment Re:Sure they can (Score 3, Insightful) 630

The other thing is that many of us on /. may not quite grasp how normal people use computers, and how much simpler something like live tiles could be. How many computers have you seen with a desktop full of icons, how many people who can't manage simple things like bookmarks, etc.?

I see what you're saying, but I think Windows 8/Metro is a failure in this regard, mainly because Microsoft didn't go "whole hog" with this new design ethos. If you think of an iPad, it really does reduce complexity for the end user, by getting rid of so many of the things that a normal desktop computer does. This is somewhat annoying if you're trying to do something more complicated, but it does indeed simplify the computing experience for many people.

But in Windows 8, it seems that you have all the usual complexity of the conventional desktop, plus this new Metro thing. So now your average user not only has to manage all the files on the hard drive, and all the icons on their desktop, and all the windows in the usual desktop/window interface... they additionally have to figure out and manage live tiles. Worst of all, they now have two competing metaphors: desktop windows and live tiles, which sometimes work together, sometimes duplicate functionality, and sometimes are totally distinct ("I remember being able to make this work... but was it a Metro app or a regular desktop app I did it in?").

One of the most basic principles in UI design is consistency. Being consistent lets users develop muscle memory, simplifies their mental model for the computer, and lets them predict the behavior of new, unfamiliar software. Being a slave to consistency can be bad (and stifle innovation), but conversely if you break consistency you need to have a really good reason: the gain in productivity or power must be sufficient to offset the user confusion. (This is at least one reason that we stick with so many arbitrary conventions in our computers: they may not be the best conventions but by being consistent people can at least learn them.)

Windows 8/Metro breaks consistency in a major way. Not just in breaking with tradition (which can be justified if the new interface is sufficiently better), but by having internal inconsistency between the two competing UI metaphors. By not committing to one or the other, MS is making both of them more confusing.

You may argue that novice users will just stick to the simplicity of Metro, and never be bothered by the complexity of the traditional desktop (which will be available for power users that need it)... but I am unconvinced to say the least. Legacy software will jolt the user back into the desktop. Even novice users have probably used a conventional desktop and will try to get back into it. Metro in general does not appear to reproduce all the functionality of the conventional desktop. So users will now have to flip between the two different modes all the time. In fact some have also argued the opposite: that novice users will stick to the desktop and ignore Metro (or just use it as a fancy app launcher). This still adds needless complexity. Either way, this is a UI disaster.

It's been said so many times that it's almost pointless to say it again: Metro looks like a very nice UI solution for mobile and tablets. But whoever thought it was the future of desktop computing needs to have their head examined.

Comment Re:Torture (Score 4, Interesting) 357

There's that. There's also the fact that these non-lethal weapons are intended to be used against someone who is being violent: in other words, they are a last resort to subdue someone out of control before they do serious harm, whether that be to another citizen (protester or bystander), to a police officer, or to themselves. The purpose in using a non-lethal weapon is that in doing this harm to them, you will prevent a much greater harm.

Which, really, highlights how inappropriately all these non-lethal weapons and anti-riot instruments are used nowadays. They've gone from 'preventing imminent violence and harm' to 'making someone unstable easier to deal with' to 'a way to subdue someone, no different from handcuffing them really'. It's positively criminal and evil how thoughtlessly law enforcement now deploys devices like tasers, rubber bullets, and mace. These things were designed as last resorts and are now being used routinely. If a person is being disruptive but there is no imminent threat of harm, then these tools should not be used. Even if the person has clearly broken a law and needs to be arrested, these tools should be avoided: the person should be subdued peacefully somehow (sometimes this means just waiting, letting them yell and whatnot, until they tire themselves out and can be safely arrested).

Comment Re:No headache? (Score 4, Informative) 52

For those with access, here's the actual scientific article:
Alexander M. Stolyarov, Lei Wei, Ofer Shapira, Fabien Sorin, Song L. Chua, John D. Joannopoulos & Yoel Fink, "Microfluidic directional emission control of an azimuthally polarized radial fibre laser," Nature Photonics (2012), doi:10.1038/nphoton.2012.24

Here is the abstract:

Lasers with cylindrically symmetric polarization states are predominantly based on whispering-gallery modes, characterized by high angular momentum and dominated by azimuthal emission. Here, a zero-angular-momentum laser with purely radial emission is demonstrated. An axially invariant, cylindrical photonic-bandgap fibre cavity filled with a microfluidic gain medium plug is axially pumped, resulting in a unique radiating field pattern characterized by cylindrical symmetry and a fixed polarization pointed in the azimuthal direction. Encircling the fibre core is an array of electrically contacted and independently addressable liquid-crystal microchannels embedded in the fibre cladding. These channels modulate the polarized wavefront emanating from the fibre core, leading to a laser with a dynamically controlled intensity distribution spanning the full azimuthal angular range. This new capability, implemented monolithically within a single fibre, presents opportunities ranging from flexible multidirectional displays to minimally invasive directed light delivery systems for medical applications.

In answer to your question, no this isn't a hologram, although in some sense it achieves a similar goal. Regular screens control the emission of light as a function of position. Holograms control not just the intensity of the emanating light but also the phase; this phase information carries all the extra information about the light field passing through a given plane. This new device controls the intensity and angular spread of the light coming from each pixel, thereby controlling the full shape of the light field being emitted from the plane of the screen.

With both a hologram and this directional-emission concept, you're controlling the angular spread of the light coming from each point, and are thus fully specifying the light field, creating 'proper 3D' that is physically realistic and fully convincing. (Assuming you have enough angular resolution in your output to create the small differences the eye is looking for, of course.)
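To make that distinction concrete, here's the standard light-field bookkeeping (textbook notation, not anything taken from the paper):

    I(x, y)                                % all a conventional screen controls
    L(x, y, \theta, \phi)                  % the full 4D light field through a plane
    E(x, y) = A(x, y)\, e^{i\varphi(x,y)}  % what a hologram encodes (amplitude + phase)

A hologram gets at L indirectly, by encoding the complex field E whose propagation reconstructs it; this device modulates the angular part of L directly, pixel by pixel.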

As for why they are using a laser as the source light, it's mostly because they want detailed polarization control. (Coupling lasers into fiber-optics is well-established technology for telecommunications.) By controlling the exact mode of the laser-light propagation through the fiber, they can control the polarization of the light that shines out of the fiber, and thereby use conventional tricks to modulate that light. In particular, in an LCD screen, small electric fields are used to re-orient liquid-crystal molecules, which then either extinguish or transmit the light (based on whether the orientation of the LC molecule is aligned with the polarization of the light).
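For reference, the 'conventional trick' here is just Malus's law: an analyzer (which is effectively what each LC cell is) transmits a cos²θ fraction of linearly polarized light. A minimal sketch, purely my own illustration and nothing to do with the paper's device model:

    import math

    def transmitted_fraction(angle_deg):
        """Malus's law: fraction of linearly polarized light passed by an
        analyzer rotated angle_deg away from the polarization axis."""
        theta = math.radians(angle_deg)
        return math.cos(theta) ** 2

    # An aligned LC cell transmits fully; a crossed one extinguishes.
    for angle in (0, 30, 45, 60, 90):
        print(f"{angle:2d} deg -> {transmitted_fraction(angle):.2f}")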

Overall it's an ingenious trick: have a light fiber emit light with controlled polarization. Then have a series of LC pixels on the outside of the fiber, whose orientation can now not just modulate the intensity of emission as a function of position along the fiber, but also as a function of angle for each position along the fiber. The end result is that you control the light field emanating from the device, and so can (in principle) reconstruct whatever full-3D image you want.

Of course the prototype in the article only has four LC channels along the fiber. Enough to create a different image on the front vs. the back of the screen. Not nearly enough to create realistic 3D. Also they are only controlling the angle in one direction (around the fiber axis), and not the other (the tilt angle with respect to the fiber axis). But scaling up of the concept (where the fiber has thousands of LC polarizers for various angles) should allow for some really amazing display technology.

Comment Re:No headache? (Score 5, Informative) 52

Is there a word for where both eyes' 'beams' are pointing to?

That's usually called convergence. It's one of at least 5 ways that humans infer distances and reconstruct the third dimension from what they see:
1. Focal depth: based on how much the eye's lens has to focus
2. Convergence: based on the slight differences in pointing of the two eyes
3. Stereoscopy: based on the slight differences between the left and right image
4. Parallax: the different displacements/motions of objects at different distances (e.g. when you move your head)
5. Visual inference: reconstructing using cues like occlusion, lighting, shadows, etc.

Unless all 5 of those agree, the image won't look 'truly 3D': it will seem wrong and in many cases can cause headaches or nausea (your brain is getting conflicting information for which there is no physically-correct solution). The reason that current 3D systems fail is that they don't match all 5. A regular 2D movie (or a photograph, etc.) gives you #5 and that's it. This actually works remarkably well. Glasses-based 3D systems try to trick you by giving each eye a slightly different image, which adds #3, but since 1, 2, and 4 are still wrong, the overall effect feels weird: your eyes still have to point at, and focus on, the movie screen. (It's even worse for 3D-TV since you are focusing on something relatively close to you.)
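To put rough numbers on cue #2: the convergence angle is simple triangle geometry, roughly 2·atan(IPD/2d) for interpupillary distance IPD and fixation distance d. A back-of-the-envelope sketch (my own numbers, purely illustrative):

    import math

    def vergence_angle_deg(distance_m, ipd_m=0.063):
        """Angle between the two eyes' lines of sight when fixating a point
        at distance_m, for a typical ~63 mm interpupillary distance."""
        return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

    # A 3D-TV viewer's eyes converge on the screen (~2 m away) while the
    # stereo content simulates an object at, say, 0.5 m: the cues disagree.
    print(vergence_angle_deg(2.0))   # ~1.8 degrees (the actual screen)
    print(vergence_angle_deg(0.5))   # ~7.2 degrees (the virtual object)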

The reason this happens is precisely because a movie/TV screen has spatial resolution (each pixel is different) but no angular resolution (the image on the screen is the same regardless of where your head/eyes are positioned). If you could add back in the angular information (with enough resolution), then you could create an arbitrary light field that is indistinguishable from a physically-realistic light field. Done right, in terms of both angular resolution and computing a physically-correct light field, this would give you 1, 2, 3, and 4. (And 5 also, if what's being projected is a realistic scene with proper shadowing and so forth.) If the light field is properly created, each eye will get a slightly different image (since each eye is at a slightly different angle with respect to the screen); these images will change as you move your head around; and your eyes will in fact NOT focus or converge on the location of the screen: they will focus and converge on the virtual image being created by the light field emanating from the screen. (This is similar to a hologram, which can be a two-dimensional sheet and yet reconstruct the light field that would come from a three-dimensional object, and can create virtual images that are not in the plane of the sheet.)

The prototype being demonstrated in this article is not good enough to do that, mind you: they don't have enough angular resolution to trick your eyes. However that's where this technology is headed, and if it's done at high enough resolution, we will finally get proper 3D: where we're not just tricking your eyes, but where we're actually projecting the correct light field towards the viewer.

Comment Re:Similar software (Score 2) 103

LastCalc looks absolutely amazing! I love Google's ability to do on-the-fly math with unit conversion, and it seems that LastCalc is giving us this and more! It's great.

A question for you (or a feature request, I suppose): how do we add more information to the behind-the-scenes taxonomy? For instance, if I go "2*pi*1 nanometers in angstroms" it correctly converts from "nanometers" to "angstroms". However if I use "nm" instead, it doesn't know what I mean. Of course I can add a definition "1 nm = 10 angstroms" and from then on it works correctly... but I don't want to have to add that every time I use LastCalc!

Presumably you have a database behind-the-scenes with taxonomies for various units. Is there any way for end-users to edit that taxonomy (wiki-style), or perhaps submit new relations/data for inclusion? Now that you're open-sourcing this project, it seems like you could take advantage of community involvement to expand and refine the taxonomy, making the system ever-more-powerful. (I see you have a Google Group... so, is the intention that people just discuss this in that forum? Seems like it would be more efficient to have a wiki or open database where people (even non-programmers) could contribute suggestions for units/relations/etc.)
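For what it's worth, the sort of alias table I'm imagining doesn't need to be complicated. A toy sketch, entirely hypothetical and certainly not LastCalc's actual internals:

    import math

    # Canonical units with scale factors to a base unit (metres), plus
    # aliases. In a wiki-style system, users would submit rows like these.
    UNITS = {"meter": 1.0, "nanometer": 1e-9, "angstrom": 1e-10}
    ALIASES = {"m": "meter", "nm": "nanometer", "A": "angstrom"}

    def convert(value, src, dst):
        """Convert value from unit src to unit dst, resolving aliases."""
        src = ALIASES.get(src, src)
        dst = ALIASES.get(dst, dst)
        return value * UNITS[src] / UNITS[dst]

    print(convert(2 * math.pi, "nm", "angstrom"))  # ~62.83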

Anyways, thanks for your efforts on what looks like a great project. I hope you keep it up!

Comment Re:No (Score 5, Insightful) 502

That's correct: the device is using both electrical and thermal energy input to generate light output.

Now, some people might still be bothered by this, because the idea of using ambient heat to do useful work is another one of those "perpetual motion machine" kind of claims. Heat represents a disordered (high-entropy) state, from which you cannot extract useful work. The relevant thought experiment here is the Brownian ratchet: the idea being that you have a ratchet that gets bombarded by random molecular collisions (in water or air, say). The ratchet will turn forward when a random collision is strong enough, and so over time you can use this turning motion to wind a spring and thus convert random thermal motion into stored energy. The reason this doesn't work in real life is because if random thermal motion is enough to overcome the pawl on the ratchet, then the pawl will be 'hot' enough that it will randomly and spontaneously lift up, turning the wheel backwards. The only way to avoid this is to have the pawl at a lower temperature than the rest of the mechanism: this works, but it's well-known that you can extract useful work from a thermal gradient, so the laws of thermodynamics remain intact.
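The quantitative version of that argument is Feynman's classic ratchet-and-pawl analysis (standard textbook material, nothing specific to this paper): if lifting the pawl costs energy ε, both hopping rates are Boltzmann factors,

    \text{rate}_{\text{forward}} \propto e^{-\varepsilon / k T_{\text{vanes}}}, \qquad
    \text{rate}_{\text{backward}} \propto e^{-\varepsilon / k T_{\text{pawl}}}

With everything at one temperature the two rates are equal, so the net rotation, and hence the extractable work, is exactly zero; only a gradient (T_vanes > T_pawl) produces net motion, just as ordinary thermodynamics demands.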

Coming back to this present result, how does this device use ambient heat to generate useful photons? Sure, it acts as a thermoelectric cooler, establishing a local thermal gradient, but this sounds like 'cheating' in that it's a way to extract energy from the entropy of the surroundings! The very first sentence of the scientific paper addresses this:

The presence of entropy in incoherent electromagnetic radiation permits semiconductor light-emitting diodes (LEDs) to emit more optical power than they consume in electrical power, with the remainder drawn from lattice heat [1,2].

Basically, the device is converting high-entropy thermal energy into even higher entropy incoherent electromagnetic radiation (light output). So, the second law of thermodynamics is not violated. Essentially, this device is acting as a way to connect thermal degrees of freedom to E&M degrees of freedom. The system, wanting to increase entropy as much as possible, tries to spread energy through all these degrees of freedom, which means creating some photons at the expense of some of the heat in the material.
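In equation form, this is just first- and second-law bookkeeping (my paraphrase of the argument, not the paper's actual derivation): the device draws electrical power P_elec plus lattice heat Q̇ at temperature T and emits optical power P_opt, so

    P_{\text{opt}} = P_{\text{elec}} + \dot{Q}
    \quad\Rightarrow\quad
    \eta \equiv \frac{P_{\text{opt}}}{P_{\text{elec}}} > 1 \text{ whenever } \dot{Q} > 0,
    \qquad \text{allowed because } \dot{S}_{\text{photons}} \ge \dot{Q}/T.

The wall-plug efficiency exceeds unity without violating anything, because the incoherent photons carry away at least as much entropy as was removed from the lattice.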

It's a neat bit of physics, and will probably have implications for device efficiency and other applications.

Comment Nerd-rant deconstruction (Score 4, Insightful) 525

I know it's a sign of weakness to do a line-by-line rebuttal of flamebait, but TFA is seriously pissing me off.

Policy makers ... knew that music sales in the United States are less than half of what they were in 1999, when the file-sharing site Napster emerged, and that direct employment in the industry had fallen by more than half since then, to less than 10,000.

These statements are not backed up. Given the industry's history of exaggerating their claims, I put the onus on them to prove that these numbers are in any way correct.

Consider, for example, the claim that SOPA and PIPA were “censorship,” a loaded and inflammatory term designed to evoke images of crackdowns on pro-democracy Web sites by China or Iran.

Yet the author's use of "theft" and "piracy" are totally neutral, without any intent to evoke particular emotions in the readership?

When the police close down a store fencing stolen goods, it isn’t censorship, but when those stolen goods are fenced online, it is?

This is being purposefully obtuse. The claims of 'censorship' were about collateral damage: that the laws would have a chilling effect and would be open to abuse. No one was directly equating "shutting down online counterfeiting sites" with censorship. (Although, of course, the difference between shutting down a physical store and an online presence is indeed that the Internet is all about communication/data-transfer, and curtailing communication is essentially censorship.)

They also argued misleadingly that the bills would have required Web sites to “monitor” what their users upload, conveniently ignoring provisions like the “No Duty to Monitor” section.

This is an interesting claim. But if the author is sure that the "No Duty to Monitor" section protects conveyors of content, then why not spell that argument out in detail? Why not quote from the bill, and explain how this protection works? That is the very crux of the disagreement, it would seem, yet the author just mentions it in passing.

Apparently, Wikipedia and Google don’t recognize the ethical boundary between the neutral reporting of information and the presentation of editorial opinion as fact.

This is perhaps the only valid point in the entire piece. It is true that Wikipedia and Google (in very different ways) strive for some measure of neutral transmission of information. I can see how one could argue that using their position as trusted sources of information to spread their own viewpoint is an abuse. However:
1. This is begging the question, by assuming that what Wikipedia and Google were reporting was incorrect. But that is precisely what the debate is about: is it true that SOPA/PIPA would lead to collateral censorship? If the claim is true (and as far as I can tell, it is), then Wikipedia spreading that information was just another manifestation of them spreading truthful statements.
2. These entities do have a right to let their opinion be known.
3. The opinion piece provides no reason why these companies would be misinforming the populace. What is it they hope to get out of it? Their stated reason is simple: that they wanted to stop the legislation because they couldn't continue operating under the legislation. The author provides no evidence, not even spurious reasoning in fact, for any other motivation. So, one could accuse them of being mistaken, but to accuse them of pushing an ideology is wrongheaded.

“old media” draws a line between “news” and “editorial.”

This is laughable. Mainstream media has a well-documented history of injecting bias into their reporting (everything from their selection of what to cover, to how events are described, to thinly-veiled editorials/opinions masquerading as 'balanced reporting').

The violation of neutrality is a patent hypocrisy: these companies have long argued that Internet service providers (telecommunications and cable companies) had to be regulated under the doctrine of “net neutrality” ...

This is a red herring of the highest order. The debate about net neutrality is about a very specific kind of neutrality.

And how many of those e-mails were from the same people who attacked the Web sites of the Department of Justice, the Motion Picture Association of America, my organization and others as retribution for the seizure of Megaupload, an international digital piracy operation? Indeed, it’s hackers like the group Anonymous that engage in real censorship when they stifle the speech of those with whom they disagree.

I see. Equating the massive outpouring of opinion with a minority of people who engage in illegal hacking (I'm surprised he didn't pull out the "terrorist" card). He can't fathom that the public actually believes what they are saying. He is certain that they are either misled or criminals (possibly both).

Perhaps this is naïve, but I’d like to believe that the companies that opposed SOPA and PIPA will now feel some responsibility to help come up with constructive alternatives. ... The diversionary bill that they drafted, the OPEN Act, would do little to stop the illegal behavior and would not establish a workable framework, standards or remedies.

So in one sentence he bemoans that the opposition is not coming up with any alternatives, and two sentences later mentions offhand that the opposition has, indeed, suggested an alternative. But he doesn't like that alternative. Moreover he calls that alternative 'diversionary'. As we can see, he is certainly above using "loaded and inflammatory terms" to make his point.

We all share the goal of a safe and legal Internet. We need reason, not rhetoric, in discussing how to achieve it.

The irony of course is that the original legislation was being pushed through without any public discourse. They wanted it to happen without the input of a myriad of stakeholders (like, the public). Only now that the entire process has been laid bare do they call for reasoned discussion. Again, his entire essay is incredulous that the public has the audacity to disagree with his plan. He is annoyed not so much with what Google and Wikipedia's opinions are, but that they brought this debate to the people... and that in an open, reasoned debate, his extremist plans cannot survive for long.

Comment Re:What's the point of journals? (Score 3, Interesting) 206

Transparency is generally a good thing, and I agree that many aspects of the publishing process are needlessly opaque. This should be fixed. But anonymous peer review has certain advantages. It provides an opportunity for reviewers to be completely honest. Think about a junior scientist reviewing a paper by a more well-established peer: they may fear that a critical review will seriously hurt their career. Think about scientists not wanting to be critical in a review of a friend's paper, or conversely people punishing papers because 'they rejected my last paper!' And so on. The journal editor serves the role of maintaining the anonymous peer review system. (Note that in anonymous peer review, the reviewer is still free to disclose their identity by signing their review; and indeed many scientists do this.)

Of course there could be ways to do anonymous peer-review in an open forum system (e.g. using trusted editor-like intermediaries, or using verifiable keys that can establish trust without disclosing who posted the review). It could be done; in fact nothing prevents all of this from happening right now (even now, authors could individually post their rejected articles, including all peer-review and editor comments, to their institutional websites; this at least partially happens through arXiv).
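As a sketch of the 'verifiable keys' idea (entirely my own illustration, using the PyNaCl library; no journal actually works this way): the reviewer signs under a pseudonymous key, an editor vouches for the public half, and readers verify without ever learning the reviewer's name.

    from nacl.signing import SigningKey

    # The reviewer generates a pseudonymous keypair. The editor attests
    # that the *public* key belongs to a qualified reviewer, without
    # revealing who holds the private half.
    reviewer_key = SigningKey.generate()
    public_key = reviewer_key.verify_key

    signed = reviewer_key.sign(b"The error analysis in Section 3 is unconvincing.")

    # Anyone can check the review against the vetted public key; this
    # raises nacl.exceptions.BadSignatureError if the text was altered.
    public_key.verify(signed)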

My point about efficiency was that for a given final state X, we can either tweak our current journal model until it reaches X, or we can start from scratch building a new initially inefficient system A, and then tweak that until we reach X. Both will have serious growing pains, but it seems to me that it will be easier (in particular, easier to get scientists on-board with the changes) to smoothly transition from the current system to the final desired state of X. Doing it smoothly means no downtime; each adjustment can be tested and the community can decide whether they like the change. So, again, I agree that there are many things about the journal system that could be fixed, and which modern Internet technology can help fix (open access, transparency, better logging of opinions/comments/etc., allowing any scientists to comment on any article, creating a space for public debates/discussions, etc.). I just think that the most kinetically favorable path to that new state is a series of changes from the current journal system (for all its faults, the community is doing a lot of great science these days!).

Comment Re:What's the point of journals? (Score 3, Informative) 206

The problem with a free forum is signal to noise. It would have to have some kind of reputation system, such as scientists rating/flagging each other's contributions. That way, you could add some respected scientists to your 'trusted' list, and things that they trust would be highlighted/promoted to you. Essentially a web of trust model. This has obvious downsides, such as scalability and the inherent formation of cliques and the like.
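A minimal sketch of what I mean, with made-up names and scores: trust flows one hop from the scientists you've explicitly trusted to the items they endorse.

    # Who I trust, and how much (0..1).
    my_trust = {"alice": 0.9, "bob": 0.5}

    # Endorsements: scientist -> items they vouch for.
    endorsements = {
        "alice": ["paper-42", "comment-7"],
        "bob":   ["paper-42"],
        "carol": ["paper-99"],  # not in my web of trust, so ignored
    }

    def score(item):
        """Sum the trust weights of everyone in my web who endorsed item."""
        return sum(w for who, w in my_trust.items()
                   if item in endorsements.get(who, []))

    print(score("paper-42"))  # 1.4 -- endorsed by two trusted people
    print(score("paper-99"))  # 0.0 -- endorser unknown to me

The clique problem is visible right in the sketch: carol's endorsement counts for nothing until someone I already trust vouches for her.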

The thing is that journals are actually a decent solution to these issues. They curate content on your behalf, and you decide which journals are more reputable than others. By doing some of the leg-work for you, they handle scalability and make the format relatively open to all comers. They also have the advantage of already existing: scientists already know which journals are better than others, understand the process of submitting to journals, and so on...

My point is that while you could entirely ditch the journals, and build a whole new system... this would be inefficient. It would seem simpler to take the current journal system, and just fix the things that are wrong with it (in particular, the exorbitant costs and the lack of open access). On the one hand, you may say it's hopelessly idealistic of me to expect for-profit journals to willingly move towards a more open format. On the other hand, there are already highly successful open-access journal ventures (e.g. PLoS), which are indeed pushing the journal system towards open access. So there is hope that we can reform the journal system.

Comment Re:Innovation (Score 5, Interesting) 449

Agreed. It's fashionable to decry any new UI ideas as stupid. And indeed many UI redesigns are a step backwards, or purely aesthetic, or confusing, ... I'm not a fan of Unity, for instance. But we have to be at least somewhat open to new UI ideas, or computer interaction will never move forward.

This particular idea seems really good to me. In fact it's something I've been wanting for a long time. There have been small pushes in this direction (e.g. the Ubiquity add-on for Firefox would let you type commands (like "map XXX" or "email page to XXX") and get immediately useful results), but for it to really work, from a user perspective, it has to be available in every application so that it's worth the cost to learn the new style.

Being able to search the menu structure is really powerful, especially for applications with loads of commands (photo editors, word processors, etc.). I've lost count of the amount of time I've wasted searching through menus for a command that I use infrequently. I know it exists, I've used it before... but does it count as a "Filter" or an "Adjustment" or an "Edit"? Why can't I just search for it? Moreover, I shouldn't have to train myself to remember where it was put. Once you get used to typing commands, it can be extremely fast to do so, becoming almost as fast as a keyboard shortcut. (Obviously this will be more the case in applications where your hands are already on the keyboard, like word processors; it could be slow in applications like photo-editing where your hand is usually on the mouse...)

The ability to rapidly invoke commands via the keyboard is something that I would think most slashdotters would love: it adds back in some of the power of the command line. It also inherently streamlines across applications (you should be able to just type "Save" or "Preferences" in any application and get the expected behavior, regardless of where they put the menu item. If they're smart, they'll add synonyms, so that "Options" and "Preferences" map to each other...)
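The core mechanic really is this simple; a toy version of searchable menus with synonym mapping (my own sketch, not Ubuntu's actual HUD code):

    # Flattened menu structure: command -> the menu path it lives under.
    MENUS = {
        "Gaussian Blur":  "Filters > Blur",
        "Hue/Saturation": "Adjustments > Color",
        "Preferences":    "Edit",
    }
    SYNONYMS = {"options": "preferences", "settings": "preferences"}

    def search(query):
        """Match a typed query against commands and paths, via synonyms."""
        q = SYNONYMS.get(query.lower(), query.lower())
        return [(cmd, path) for cmd, path in MENUS.items()
                if q in cmd.lower() or q in path.lower()]

    print(search("blur"))     # finds Gaussian Blur without any menu-diving
    print(search("Options"))  # the synonym maps straight to Preferences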

While I am excited about all this, they do need to leave, in my opinion, the usual menu bar accessible and visible. The reason is simple: during the initial learning phase of an application, you don't even know what's possible. You need some way to explore the available commands, see what the app can do, and experiment. Only once you're somewhat familiar with the application does it make sense to quickly invoke commands with the keyboard.

Comment Re:Email is private? (Score 3, Insightful) 533

That's kinda silly. If I have a phone conversation in an empty room of a friend's house, then according to you it's not a private communication because I'm having it in a room controlled by someone else, and they could have bugged the room? Or if I write a personal letter in my office at work, it's not private because my employer may have installed a secret monitoring camera?

The fact is that there are social conventions afoot: for example that my friends don't bug their houses and that my employer hasn't installed secret cameras (some of these conventions are in fact backed up by laws). As such, even though someone ~could~ intercept my communication, it is presumptively private, and people who circumvented that would be accused of violating my privacy.

Similarly with networks. It's certainly possible for my friend to keylog their computer, or make copies of all traffic that passes through their router. But most sensible people would assume that this is not happening, and that doing so would be an invasion of the privacy of others.

So, email is private. That doesn't mean it's un-interceptable (neither is postal mail: it's trivial to grab someone else's mail and read it). But those who intercept it are violating privacy. (Of course if privacy is important to you, then you should take extra steps (e.g. encryption). But communications that you target towards a specific person are presumptively private.)
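For the record, those extra steps aren't exotic. A sketch using the python-gnupg wrapper (the address is hypothetical, and this assumes your correspondent has published a public key you've imported):

    import gnupg  # the python-gnupg package, which wraps the gpg binary

    gpg = gnupg.GPG()
    # Encrypt to the recipient's public key; only their private key can read it.
    encrypted = gpg.encrypt("meet me at noon", ["alice@example.com"])
    print(str(encrypted))  # ASCII-armored ciphertext, safe to send over email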

Comment Children acting childish... (Score 5, Insightful) 533

Giving out your password as a demonstration of trust is just silly. I trust my boss with work-related things, but that doesn't mean I give him the passwords to all the servers at work. Why? He doesn't need them. I trust my mom, but I don't give her my bank PIN. Why? She doesn't need it. I trust my girlfriend but I don't give her my gmail password. Why? Because she has no use for it. The difference between strangers and people I trust is that I ~would~ give friends/family secret credentials, if there was a valid need (e.g. I was sick and needed my girlfriend to perform a financial transaction for me). But giving out the details just for fun is illogical, and insecure.

Moreover, it's more a manifestation of a lack of trust. I don't care that I don't know my girlfriend's Facebook password... because I trust her. The only boyfriends/girlfriends who want each other's passwords are those who don't trust each other: they want to check up on what the other one is posting/saying. They don't trust them enough to let them have privacy or private conversations. I've seen this happen (my sister once had a jealous boyfriend who thought she was cheating on him and thus demanded access to her email and Facebook passwords so that he could check for himself... the relationship did not last).

Overall, this whole "if you loved me you'd give me your password" is infantile. The appropriate response is: "If you respected me you wouldn't ask for it."

Comment Re:This again? (Score 1) 589

In the abstract, you're right that we can't know a priori what the 'fair' distribution should look like. If everyone had equal opportunities to pursue their interests, free from any bias or pressure (either pressure to join or pressure to stay out), then the distribution within a sub-field could match the population at large (if interest in the sub-field is uncorrelated with the distribution parameter in question (ethnicity, gender, etc.)), or the distribution within a sub-field could be different from the population at large (if interest in the sub-field is somehow correlated with some parameter). An obvious example is age: the distribution of ages within a particular job does not match society at large. Nor would we expect it to: the ability and interests of people vary strongly with age.

Now that we've gotten that abstractness out of the way, let's consider the real world. In the general case, answering these kinds of questions can be quite difficult, as bias can be stubbornly difficult to identify and quantify. On top of that, people's upbringing and sociological context will of course affect their opinions and interests. Is it enough for us to just provide equal opportunities? Or should it be a social goal to reach out specifically to people who, for whatever reason, don't realize that they would enjoy and be good at work in a particular field?

But we can get more concrete still. Is the distribution of women in programming reflective of their true interest, or is it somehow skewed by sociological biases/pressures/etc.? Well, here there is actually plenty of evidence. Firstly, the gender differences in capabilities are either small or non-existent. The gender difference in, say, math skills between men and women is very small: far smaller than the spread within the male population or the female population, for instance. (This has been borne out in many studies.) Also, the gender differences are well known to vary as a function of time (getting smaller as social equality gets better), and to vary as a function of location and social context, suggesting that what differences are seen are due to environment and not due to intrinsic differences in capabilities. Taken together, all of this suggests that if we're just talking about capabilities, the distribution should be much closer to 50:50 than it is today.
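To make 'far smaller than the spread within' concrete: for two normal distributions whose means differ by a small effect size d (in units of the common standard deviation), the overlap is enormous. A toy calculation, with d = 0.1 chosen purely for illustration rather than taken from any particular study:

    import math

    def overlap(d):
        """Overlapping coefficient of two unit-variance normal distributions
        whose means differ by d standard deviations: 2 * Phi(-d/2)."""
        return 1 + math.erf(-d / (2 * math.sqrt(2)))

    print(overlap(0.1))  # ~0.96: the two populations are ~96% overlapping
    print(overlap(1.0))  # ~0.62: even a 'large' effect leaves heavy overlap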

Additionally, there is no lack of evidence for sexism still existing, and in particular still existing in high-education fields like math, science, and programming. So while it is difficult to say what the 'fair' distribution of men:women should be in the field of programming, we can be fairly sure that it is currently strongly skewed by the considerable sexism known to exist.

My point is that while it can be difficult to really know what the fair distribution would look like (where everyone is making free, informed, unpressured choices)... we are demonstrably not yet at that point. There is still plenty of overt bias, sexism, and hostility. So until we have done a much better job of leveling the playing field, we shouldn't get sidetracked by esoteric 'ideal distribution' questions.
