There's nothing I find particularly alarming here, and the behaviour is in fact pretty much what I would expect for computing sin(x). Sure, maybe the docs need updating, but nobody would really expect fsin to do much better than it does. And in fact, if you wanted to maintain good accuracy even for large values (up to the double-precision range), you would need a 2048-bit subtraction just for the range reduction! As far as I can tell, the accuracy up to pi/2 is pretty good. If you want good accuracy beyond that, you'd better do the range reduction yourself. In general, I would also argue that if you have accuracy issues with fsin, then your code is probably broken to begin with.
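For what "do the range reduction yourself" might look like, here's a minimal sketch in Python. It assumes `math.remainder` (the correctly rounded IEEE-754 remainder, available since Python 3.7); note that 2*pi is itself only a double approximation of the true period, so this still drifts for very large inputs — full fidelity needs the huge-precision (Payne–Hanek style) reduction alluded to above.

```python
import math

def sin_reduced(x):
    # Fold x into [-pi, pi] with the correctly rounded IEEE-754
    # remainder before calling sin. Caveat: 2*math.pi is only an
    # approximation of the period, so accuracy still degrades for
    # huge x; this is a sketch, not a production range reduction.
    r = math.remainder(x, 2 * math.pi)
    return math.sin(r)
```

For moderate arguments this agrees with `math.sin` to the last bit or so, while keeping the result of the final `sin` call in the well-conditioned region near zero.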
There's a difference between attacking a piece of software and attacking the author. I personally have no opinion on systemd (hell, I don't even know what init system I'm running atm), but I feel like any complaint people have should be directed at whoever *chose* systemd, rather than who wrote it. You can't blame someone for writing software. If you don't like it, don't use it and/or tell distros not to use it.
Careful what you wish for. With the current generation, you might end up with iWar instead.
I like to keep these scammers on the line for as long as possible, but without wasting my own time. So far, what I've seen work well is "Oh, my computer just crashed, I need to reboot" and "Now Windows is applying updates". This means they'll wait without me having to think of stuff to tell them. Any other effective tricks?
In my opinion, peer review should be changed to a double-blind system: the reviewer should not see the name and affiliation of the authors, and should judge the work as they would grade an undergrad paper (i.e. harshly). This way, I believe the signal-to-noise ratio in journals would increase, and only good papers would get published.
Please no! The problem with this approach (and it's already happening) is that what will get published is boring papers that bring tiny improvements over the state of the art. They'll get accepted because the reviewers will find nothing wrong with the paper, not because there's much good in there. On the other hand, the really new and interesting stuff will inevitably be less rigorous and probably more controversial, so it's going to be rejected.
Personally, I'd rather have 5% great papers among 95% crap than 100% papers that are neither great nor crap, just uninteresting. Reviews need to move towards positive ratings (how many things are interesting) and away from negative ratings (how many issues you can find in the paper). But it's not happening any time soon, and it's one of the reasons I've mostly stopped reviewing (too often overruled by the associate editor to be worth my time).
It's going to be interesting when the Chinese government issues Google a warrant to get data from the US.
Software on Internet-connected devices is a bit different from your examples though. No matter how insecure cars are, it would be really hard for me to steal a million cars in one night, let alone without being caught. Yet, it's common to see millions of computers/phones being hacked in a very short period of time. And the risk to the person responsible is much lower.
It would certainly be nice, but it's not realistic. For a simple paper, it would likely cost a few thousand, but for anything that requires fancy materials, it could easily run into the millions. The only level where fraud prevention makes sense is at the institution (company, lab, university) level.
So you're saying that reviewers should have to reproduce the authors' results (using their own funds) before accepting the paper, or risk being disciplined? Aside from ending up with zero reviewers, I don't see what this could possibly accomplish. Peer review is designed to catch mistakes, not fraud.
I think what is missing is that a) more reviewers actually need to be experts and practicing scientists, and b) doing good reviews needs to earn you scientific reputation rewards. At the moment, investing time in reviewing well is a losing game for those doing it.
Well, there's also the fact that one of the most fundamental assumptions you have to make while reviewing is that the authors are acting in good faith. It's really hard to review anything otherwise (we're scientists, not a sort of police).
I agree that good reviews do not need to be binary. You can also "accept if this is fixed", "rewrite as an 'idea' paper", "publish in a different field", "make it a poster", etc. But all that takes time and real understanding.
It goes beyond just that. I should have said "multi-dimensional", maybe. In many cases, I want to say "publish this article because the idea is good, despite the implementation being flawed". In other cases, you might want to say "this is technically correct, but boring". In the medical field, it may be useful to publish something pointing out that "maybe chemical X could be harmful and it's worth further investigation" without necessarily buying all of the authors' conclusions.
Personally, I prefer reading flawed papers that come from a genuinely good idea rather than rigorous theoretical papers that are both totally correct and totally useless.
This is not a new phenomenon; it just seems to be getting worse again. But remember that Shannon had trouble publishing his "A Mathematical Theory of Communication", because no reviewer understood it or was willing to invest time in something new.
That's the problem here. Should the review system "accept the paper unless it's provably broken" or "reject the paper unless it's provably correct"? The former leads to all these issues of false stuff in medical journals and climate research, while the latter leads to good research (like the Shannon example) not being published. This needs to be more than just binary. Personally, I prefer to accept if it looks like it could be a good idea, even if some parts may be broken. Then again, I don't work on controversial stuff, and nobody dies if the algorithm is wrong. I can understand that people in other fields have different opinions, but I guess what we need is non-binary review. Of course, reviewers are also just one part of the equation. My reviews have been overruled by associate editors more often than not.
The entire world rejected the "I was just doing my job" and "I was just taking orders" excuses during the Nuremberg trials.
You should read about the Milgram experiment.
It's all about cost. It costs resources to break keys or break into machines. If you increase the cost by 10x, then they can break only 1/10 of what they could originally break using the same budget.
Don't worry, weekly recalls for firmware updates will totally fix the problem.
You think progress is slow now? See what happens when companies actively hide how they do things rather than relying on patents to protect their IP.
Yeah, imagine all these iPhone owners with rounded corners they can't even see because Apple had to hide them.