Over 15 years ago Segfault.org reported their classic: "What If Linus Torvalds Gets Hit By A Bus?" - An Empirical Study. If we learned anything from that, it's that we also have to watch out for muffins.
It all depends on what you want to do with your matrices. Various operations have various costs in different sparse matrix formats. The standard ones are COO or coordinate format: a list of triples (i, j, val); DOK or dictionary-of-keys format: the hashmap you are thinking of; LIL or list-of-lists format: a list for each row, with a list of pairs (j, val) in each entry; and CSR/CSC or compressed sparse row/column: an array of indices where each row starts, an array of column indices, and an array of values.
COO and DOK are great for changing sparsity structure; LIL is very useful if you have a lot of row-wise (or column-wise) operations, or need to manipulate rows regularly. CSR is great for matrix operations such as multiplication, addition, etc. You use what suits your use case, or switch between formats (relatively cheap) as needed.
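To make the "build in one format, compute in another" workflow concrete, here's a minimal sketch using scipy.sparse (the comment doesn't name a library; scipy is an assumption, but it implements exactly these formats):

```python
from scipy import sparse

# LIL tolerates incremental changes to the sparsity structure.
lil = sparse.lil_matrix((3, 3))
lil[0, 1] = 2.0
lil[2, 0] = 5.0

# Conversion is relatively cheap; CSR is the format you want for arithmetic.
csr = lil.tocsr()
result = csr @ csr.T  # fast sparse matrix multiply

print(csr.toarray())
```

Doing the element-wise assignments directly on a CSR matrix would work but triggers an efficiency warning, since every structural change forces the compressed index arrays to be rebuilt.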
I've sat psychologist-administered IQ tests a year apart and had my score differ by 10 points. I've been told that this is, in fact, perfectly normal and well within the accuracy expected of IQ tests by psychologists who take them seriously. I wouldn't worry about IQ scores changing (they may well do that, but it is equally likely measurement error). IQ is a very imperfect measure to begin with. Our ability to measure it, even under the best of conditions, is extremely poor. Take most IQ studies with a grain of salt.
...not a sequel, but a cash-in remake.
It's not a Mad Max movie. The main character isn't Max, the atmosphere isn't Mad Max's, it just happened to have spiked cars chasing plated cars in the wasteland.
Indeed. What they should have done was get the writer/director of the original film, who I gather had been trying to get a sequel made for over a decade, to come and write and direct the new one. Clearly whoever they got to write this didn't really understand Max's character at all.</sarcasm>
I believe Ada has pretty decent performance; the classic "Language Shootout" game has it scoring faster than Rust for the most part.
Scott Meyers is his usual excellent self: http://www.artima.com/shop/ove...
It's stupid if you're benchmarking relative efficiency -- it's not an efficient implementation (and you'll have no trouble finding explanations for why the Python and Java code they wrote, while simpler, is not efficient).
And this is why we should not teach CS101 in Java or Python. If they'd been forced to use C this whole experiment would have turned out differently.
Not at all. If you wrote your C in memory string handling as stupidly as they wrote the Python and Java you will still get worse performance in C (e.g. each iteration malloc a new string and then strcpy and strcat into it, and free the old string; compared to buffered file writes you'll lose). It's about failing to understand how to write efficient code, not about which language you chose.
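To illustrate the same principle in Python terms (these function names are mine, not from any benchmark): rebuilding the whole string every iteration does quadratic work, exactly like the malloc/strcpy/strcat/free loop described above, while accumulating pieces and joining once is amortized linear.

```python
def naive_build(n):
    # Re-allocate and copy everything so far on each iteration --
    # the Python analogue of malloc + strcpy + strcat + free per loop.
    s = ""
    for _ in range(n):
        s = s + "x" * 10  # forces a fresh copy of the whole string
    return s

def buffered_build(n):
    # Accumulate parts and join once -- amortized linear work,
    # analogous to buffered file writes.
    parts = []
    for _ in range(n):
        parts.append("x" * 10)
    return "".join(parts)

assert naive_build(1000) == buffered_build(1000)
```

Both produce identical output; only the allocation pattern differs, which is the whole point of the comment above: the performance gap lives in the approach, not the language.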
Write-only code is easy in ANY syntax.
Write-only code is possible in any language. Some languages make it easier than others.
I'm guessing the reason he doesn't take money from the fossil fuel industry is because he just can't be bothered with such trifling sums. The average salary in the US is more like $350k or $400k, IIRC. 120k is for total losers.
I can only presume you're talking about research grants combined with salary, despite saying "The average salary", because otherwise you are simply flat wrong. The average salary for (full) professors in the US is $98,974.
Should we teach everyone basic first aid and CPR, fundamentals of mechanics, and the basics of how to sew, cook, etc.? Yes, yes we should.
I don't think the "Teach Everyone to Code" movement is about making everyone professional programmers; it's about ensuring that everyone gets exposed to the basics of how programming works, just like they get exposed to the basics of a great many other things in their schooling.
The results were startling. After re-running the election 100 times with a randomly drawn nonpartisan map each time, the average simulated election result was 7 or 8 U.S. House seats for the Democrats and 5 or 6 for Republicans. The maximum number of Republican seats that emerged from any of the simulations was eight. The actual outcome of the election — four Democratic representatives and nine Republicans — did not occur in any of the simulations. "If we really want our elections to reflect the will of the people, then I think we have to put in safeguards to protect our democracy so redistrictings don't end up so biased that they essentially fix the elections before they get started," says Mattingly. But North Carolina State Senator Bob Rucho is unimpressed. "I'm saying these maps aren't gerrymandered," says Rucho. "It was a matter of what the candidates actually was able to tell the voters and if the voters agreed with them. Why would you call that uncompetitive?"
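The Monte Carlo idea behind the study can be sketched in a few lines. This is a toy version only: the function name and the precinct data are hypothetical inventions of mine, and the real study draws geographically valid maps rather than random precinct assignments. It just shows the shape of the method: redraw districts many times, tally seats under each map, and compare the distribution to the actual outcome.

```python
import random

def simulate_seats(precinct_dem_shares, n_districts, n_sims, seed=0):
    # Toy stand-in for the analysis described above: repeatedly assign
    # precincts to districts at random ("nonpartisan" here just means
    # random) and count Democratic seats under each simulated map.
    rng = random.Random(seed)
    precincts = list(precinct_dem_shares)
    size = len(precincts) // n_districts
    seat_counts = []
    for _ in range(n_sims):
        rng.shuffle(precincts)
        dem_seats = sum(
            1 for d in range(n_districts)
            if sum(precincts[d * size:(d + 1) * size]) / size > 0.5
        )
        seat_counts.append(dem_seats)
    return seat_counts

# Hypothetical precinct-level Democratic vote shares (not real NC data).
shares = [0.42] * 60 + [0.63] * 70
seats = simulate_seats(shares, n_districts=13, n_sims=100)
print(min(seats), max(seats))
```

If the actual map's seat count falls outside the range the simulations ever produce, that is the study's evidence that the map is an outlier relative to nonpartisan redistricting.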
Seems to me that Apple is playing catch-up in the phablet arena. Apple was late to the party and lost the toehold because of its tardiness.
No, no, you're looking at this all wrong. Apple stayed out of the Phablet market until they were "cool/hip/trendy". The vast sales Samsung had were merely to unimportant people. Apple, on the other hand, entered the market exactly when phablets became cool, because, by definition, phablets became cool only once Apple had entered the market.
Logic is a binary function. Something is in a logical set - or it is not. Being illogical is not a synonym for being mistaken. Degrees of precision are irrelevant for set inclusion. Fuzzy logic is not logic.
Fuzzy logic is logic. So are linear logic, intuitionistic logic, temporal logic, modal logic, and categorical logic. Just because you only learned Boolean logic doesn't mean there aren't well-developed, consistent logics beyond it. In practice, bivalent logics are the exception.
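For the concrete flavor: fuzzy truth values live in [0, 1], and one standard choice of connectives (the Zadeh/Gödel min-max operators, one of several in the literature) is a couple of lines. The function names are mine.

```python
# Fuzzy connectives over truth values in [0, 1] (Zadeh min-max operators).
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

# Unlike Boolean logic, the law of the excluded middle can fail:
x = 0.25
print(f_or(x, f_not(x)))  # → 0.75, not 1.0
```

Note the system is perfectly consistent; it just doesn't validate every Boolean tautology, which is exactly what makes "logic is a binary function" too narrow a definition.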