To be fair, Slashdot's summary is no worse than the paper's own summary.
There's a long list of issues with their methodology, and they make a fair assessment of these in the "Threats" section, which, BTW, should be discussed in the body of the article and not in the appendices.
As a whole, this paper reeks of "We wanted to show how / how much women were discriminated against in Open Source. Our findings showed the opposite, so we kept making up criteria until one would (barely) exhibit the bias we wanted to denounce."
Of course, when you're doing that, you're just begging to fall for this.
Non-exhaustive list of other issues I noticed:
- Weighting issues: for example, how many commits come from outsiders vs insiders. Given that, overall, women get better acceptance, I can only conclude that insiders commit far more than outsiders (in their dataset); see the first sketch after this list.
- Missing stats (for example, we get gendered stats on whether a pull request is linked to an issue, but no insider / outsider distinction)
- Plain old lies in the summary: "when a woman’s gender is identifiable, they are rejected more often", versus the actual finding, "Women have lower acceptance rates as outsiders when they are identifiable as women."
- Failure to mention that the error bars only cover sampling error within the strict dataset. I suppose this is standard practice, but those error bars are probably swamped by the non-representativeness of the dataset in the first place, and by the methodology's shortcomings, which makes them misleading (nobody cares about the sampling noise within their particular dataset; see the second sketch after this list). They make no effort to evaluate these systematic errors (obviously that would be the hard part), and leave us with some hand-waving like "we are somewhat confident that robots are not substantially influencing the results".
- Graphs whose y-axis starts at 60%, exaggerating the differences (without even a broken-axis marker)
- Using "theory" for "hypothesis"