I think it is much like a choice of attitude. I think it is generally agreed that overall women are not as risk-taking as men. Negotiating pay involves risk - the risk of being turned down, or of giving a bad impression of yourself. Of course it also involves benefits, namely higher pay. The risk is especially real if the negotiation is done badly. At any rate it's an extra effort compared to just blindly accepting the deal you get.
For this reason, I don't think there's anything wrong with people negotiating to potentially get higher pay - be it men or women. It's a personal choice whether you want to run that risk or not. Given the nature of markets, there's nothing strange about individuals being able to earn more by taking on more risk or making an extra effort. In fact it seems unavoidable.
Given my experience with HR departments etc., I think it is pretty obvious that the "offer" Reddit is going to give is highly unlikely to match the individual's skills. IT is an area where higher skills can give extreme multiples in added productivity, even though this never shows up on the pay slip. So it is notoriously difficult to value someone based on average tables of years of education, experience etc. Often the true value of an employee can only be assessed some time after hiring. This is where negotiation comes in (and here I'm also talking about post-hiring salary adjustments), because it allows the higher-skilled workers to try to get some compensation for the greatly added productivity they bring, beyond what the "averages" indicate. If Reddit eliminates this aspect, they will either have to give up hiring from this pool (because their initial offers are only around average and everyone knows negotiation isn't possible) or they will have to give very high initial offers, meaning they will overpay a lot of people.
Of the two alternatives, the latter is the safer, since you can always get rid of those you turned out to be overpaying, whereas if the initial offer is too low, you are never going to see the top talent at all.
But aside from that - isn't Reddit just a trivial news web site? It's hardly a technological marvel; any semi-skilled person could quickly code up something like that. I suspect it was done by only a few people initially. Then a few truly skilled people are required to scale it up to handle the high load, but that's it. 5-6 good developers ought to be enough. I don't know how many employees they have; I wouldn't be surprised if it's in the thousands, given they have poor enough owners/management to hire Ellen Pao as CEO.
The programmer's job is to translate the requirements from other humans into requirements that a machine can understand. For very well-defined application domains, it is indeed possible for non-programmers to use tools that achieve the same thing. That was already the case with VB 1.0 or its predecessors. I don't think the scope of applications that can be written this way has broadened much in the decades since. Applications that are just a dialog with very simple logic can be written by drawing the dialog and copy-pasting and adapting small statements.
Beyond that there's not been much progress in "auto-writing" applications. The reason is that current programming languages are actually fairly efficient and precise ways of explaining to a computer what it should do. The job of the programmer is to translate the "business" requirements into that specification.
Even for a fairly trivial stand-alone program, computers can't do this today. And even if that were to happen, consider that much of the programmer's time is not spent writing code within some confined, well-defined environment, just churning out line after line. Rather, it is spent understanding customer-specific requirements, realizing when they are impossible or inconsistent and working out a solution together with the customer, integrating multiple systems at many different levels using completely different interfaces and paradigms, and figuring out how it is all going to work together.
My experience is that most automatic code-writing and analysis tools stop working when you add this layer of complexity. They may work well for a completely stand-alone Java application with a main() method, but how do they cope when you have client code, native code, server code, 10 different frameworks, third-party libraries (some written in other languages), external web services, legacy systems, end-users who think about things differently than the internal data models do, implicit contracts/conventions, leaky abstractions, unspecified time-out conditions, workarounds required due to bugs in other systems, and differences in end-user setups? The complexity and bugs here are always resolved by human programmers, and I suspect it will stay that way for a long, long time, even if computers manage to write stand-alone programs themselves (which IMHO will be the point where we have something approaching general AI). While you can take individual aspects of these problems and formalize them, the number of combinations and the diversity of scenarios is so big that it becomes impractical.
If you have competent programmers, most of the bugs are going to be in these areas, since it is also easy for a competent human programmer to write a stand-alone program that works well on one machine.
With this in mind, the big ramp-up in the number of CS-trained individuals is concerning. I feel we have been scraping the bottom of the barrel for some years already. Given that it has been well known for many years that IT is one of the easiest areas in which to find employment and that the pay is comparatively good, and given the constant media focus on smartphones, apps and whatnot, it seems reasonable to assume that most people with even a faint interest and ability in IT have already pursued that path. With this ramp-up, there's a high risk that the market will be flooded with sub-par candidates, far more than the market is already absorbing. The result will be massive unemployment among newly trained CS people, who were never meant to study CS to begin with.
So in short, women don't want to contribute to Wikipedia because the personal gains to them are small. Men do it for the fun of it, for the creative process.
He came up with the idea of using lossy compression techniques to compress the original file, then calculating the difference between the approximate reconstruction and the original file and compressing that data; by combining the two pieces, you have a lossless compressor.
This type of approach can work, Misra said, but, in most cases, would not be more efficient than a standard lossless compression algorithm, because coding the error usually isn’t any easier than coding the data.
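The lossy-plus-residual idea is easy to sketch in code. Here is a toy Python illustration (my own construction, not the scheme discussed above): crude quantization plays the role of the lossy compressor, and storing the exact residual makes the round trip lossless.

```python
# Toy illustration of lossy-plus-residual lossless coding.
# The "lossy compressor" here is crude byte quantization; the
# residual records exactly what the quantizer threw away.

def lossy_encode(data, step=16):
    # Quantize each byte down to a multiple of `step` (lossy).
    return bytes(b - (b % step) for b in data)

def encode(data, step=16):
    approx = lossy_encode(data, step)
    # Residual = original minus approximation, per byte.
    residual = bytes(b - a for b, a in zip(data, approx))
    return approx, residual

def decode(approx, residual):
    # Adding the residual back restores the original exactly.
    return bytes(a + r for a, r in zip(approx, residual))

original = b"hello, lossless world"
approx, residual = encode(original)
assert decode(approx, residual) == original
```

Misra's point is visible here too: the residual bytes are just as numerous as the originals, so unless they are much more compressible (more structured, closer to zero) than the raw data, nothing is gained.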
Well, this is almost how MPEG movie compression works - and it really does work! MPEG works by partly describing the next picture in terms of the previous one, using motion vectors. These vectors describe how the next picture will look based on movements of small-ish macroblocks of the original picture. Now, if that were the only element of the algorithm, movies would look kind of strange (like paper-doll characters being moved about)! So the secret to making it work is to send extra information allowing the client to calculate the final picture. This is done by subtracting the predicted picture from the actual next frame. This difference picture will (if the difference between the two frames was indeed mostly due to movement) be mostly black, but it will contain some information due to noise, things that have become visible due to the movement, rotation etc. So MPEG also contains an algorithm that can very efficiently encode this "difference picture" - basically an algorithm that is very efficient at encoding an almost-black picture.
So there you have it - MPEG works by applying a very crude, lossy compression (only describing the picture difference in terms of motion vectors) and in addition transmitting the information required to correct the errors and allow reconstruction of the actual frame.
The only part where the comparison breaks down is that MPEG is not lossless. Even when transmitting the difference picture, further compression and reduction takes place (depending on bandwidth), so the actual frame cannot be reconstructed 100%. Still, MPEG is evidence that the basic principle - a crude lossy compressor combined with a transmitted correction - works.
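The motion-compensation step above can be sketched in one dimension (a deliberately simplified model; real codecs work on 2D macroblocks and then lossily transform-code the residual): predict the next frame by shifting the previous one by a motion vector, subtract to get a mostly-zero residual, and add the residual back to reconstruct exactly.

```python
# 1D sketch of motion-compensated prediction. Real MPEG operates on
# 2D macroblocks and lossily compresses the residual afterwards.

def predict(prev_frame, motion_vector):
    # Shift the previous frame by the motion vector; pad edges with 0.
    n = len(prev_frame)
    return [prev_frame[i - motion_vector] if 0 <= i - motion_vector < n else 0
            for i in range(n)]

prev_frame = [0, 0, 9, 9, 9, 0, 0, 0]   # a "bright object" at positions 2-4
next_frame = [0, 0, 0, 9, 9, 9, 0, 1]   # object moved one step right; new noise at the end

predicted = predict(prev_frame, 1)
# The residual (the "difference picture") is mostly zero - cheap to encode.
residual = [a - p for a, p in zip(next_frame, predicted)]
reconstructed = [p + r for p, r in zip(predicted, residual)]
assert reconstructed == next_frame
```

The single nonzero residual value corresponds to the noise pixel that motion alone could not predict - exactly the "mostly black difference picture" described above.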
So he definitely lied.
It has also been associated with other cancers. In contrast, the alternative painkiller aspirin has been associated with a decreased risk of many different types of cancer, notably colon cancer but also blood cancers and many others (too lazy to find all the references). In animals, it can protect against cancer after exposure to other carcinogens.
The evidence on ibuprofen seems mixed: in some studies it is associated with increased cancer risk, in others with decreased risk.
By the way, note that aspirin is not a magic bullet against cancer; it is also a blood thinner. People who take it chronically are prone to getting bruises from the slightest knocks. It also increases the risk of potentially fatal bleeds, as well as certain types of stroke (intracranial haemorrhage). On the other hand, it decreases the risk of blood clots. The net effect is difficult to assess.
The problem is that PhD students used to be the best students of their year, at least in the natural sciences. I remember that when I studied, only one PhD student was accepted every other year, and he or she would always be someone who was very clearly (after a two-minute conversation) smarter than the rest of the students. Of course that person would also have top grades - and that was when top grades were scarcer than they are today.
Today it's not like that. Everyone from the top students down to "joe average" (i.e. someone with half a brain) in a graduate programme usually gets offered a PhD. In fact, many programmes have more PhD positions than they can fill from the pool of candidate students. There's no prestige in getting a PhD anymore. And since the financial crisis it's in some ways the opposite: the PhD is an obvious path for someone who couldn't get a better-paying job in the private sector.
This state of affairs is extremely expensive for society. It's expensive to provide an extra 3-year education and salary to people of merely average intelligence. Also, the science that comes out of it is mostly bogus - not really innovative, just buying a lot of fancy equipment and applying what others have already found out, then publishing it with a twist. There have already been articles on how little the science produced in academia contributes to the economy, and a few news stories on how the glut of PhDs has lowered quality. But most politicians still seem stuck in uncritical "more education" mode... so I suspect it will be a few years before anything substantial happens in this area.