In that case I would be asking what Apple wants to do with distributed graph analytics, because that was probably Turi's most interesting/unique product and expertise. They have a great library for handling extremely large graphs distributed over many nodes, and a lot of expertise in exactly how to do that really well.
I have to admit to being a little unclear on Apple's plans here. I'm somewhat familiar with Turi's product offerings (at least, I was back when they were called Dato). It's more of a pure data analytics tool than anything, and personally I found the underlying Python libraries, which are open source, far more compelling than the point-and-click predictive analytics and charting GUIs that seemed to be their main product. And even on that front I would put more stock in scikit-learn, pandas, Dask, and the many open source deep learning libraries (mostly built on Theano and TensorFlow) if I really wanted to do machine learning and distributed machine learning.
Now don't get me wrong, Turi has some nice products, but they tend to be standalone suites designed to give front-line analysts a nice GUI for basic machine learning tools, not "push the envelope" AI. I really can't see what Apple would do with it beyond building up a business analytics suite to compete with Tableau and Azure ML. Anyone have any better ideas?
Well, it's a combination of the automatic array flattening and the fact that Perl subroutines take an arbitrary number of arguments and just shift them off the list, allowing you to "overwrite" expected arguments with the flattened list -- ideally you might hope for an error like "too many arguments to function foo". The end result is that as long as you can force things to be arrays (or lists) where people don't necessarily expect that (like, say, the CGI module handling multiple parameter definitions) you can get up to some serious funny business.
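Perl's @_ flattening has no direct equivalent elsewhere, but here's a hedged Python analogue of the same class of bug: attacker-controlled key/value pairs splatted into a call can silently overwrite fields the caller thought were fixed (much like CGI.pm multi-valued parameters flattening into a hash). The `make_record` function here is purely hypothetical, made up for illustration.

```python
# Hypothetical example: the same "overwrite expected arguments" bug class
# as Perl list flattening, expressed with Python's ** splat operator.

def make_record(**fields):
    defaults = {"role": "user"}
    # later keys win, so anything in `fields` overrides the defaults
    return {**defaults, **fields}

# Attacker-supplied pairs get flattened into the keyword arguments,
# clobbering a key the caller assumed was fixed:
attacker = {"role": "admin"}
record = make_record(name="bob", **attacker)
print(record["role"])  # "admin", not "user"
```

The fix in either language is the same: validate the shape of the input before letting it flatten into a position of trust.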
UBI as generally described is not increasing the real money supply, but instead reallocating the existing money supply. That leaves plenty of scope for a lack of inflation on things that aren't currently production-limited. Prices will rise on some things (housing, for instance, is at a shortage, and UBI would add demand without any immediate ability to increase the supply to meet it) and not on others (we generally have food surpluses on basic staples in developed countries, so there is no reason to expect general price increases on many of those). Predicting the end results of such a complex system is hard, and I agree with the previous poster that hand-waving and glittering generalities will certainly not cut it.
No, his Dübel (wall plugs) are quite important as well. Seriously, they made things much easier when fastening anything to concrete.
Play on words? Fischertechnik pieces are what I used to play with when I was a kid.
Over 15 years ago Segfault.org reported their classic: "What If Linus Torvalds Gets Hit By A Bus?" - An Empirical Study. If we learned anything from that, it's that we also have to watch out for muffins.
It all depends on what you want to do with your matrices. Various operations have different costs in different sparse matrix formats. The standard ones are: COO or coordinate format, a list of triples (i, j, val); DOK or dictionary-of-keys format, the hashmap you are thinking of; LIL or list-of-lists format, a list for each row, with a list of pairs (j, val) in each entry; and CSR/CSC or compressed sparse row/column, an array of offsets where each row starts, an array of column indices, and an array of values.
COO and DOK are great for changing the sparsity structure; LIL is very useful if you have a lot of row-wise (or column-wise) operations, or need to manipulate rows regularly; CSR is great for matrix operations such as multiplication, addition, etc. You use what suits your use case, or convert between formats (relatively cheap) as needed.
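To make the COO/CSR distinction concrete, here's a minimal pure-Python sketch (no scipy, and simplified: it assumes no duplicate (i, j) entries) showing how easy COO is to build and how CSR's row-offset array makes a matrix-vector product a single pass over the stored entries:

```python
# COO -> CSR conversion and a CSR matrix-vector product, pure Python.

def coo_to_csr(n_rows, triples):
    """Convert COO triples (i, j, val) into CSR arrays (indptr, indices, data)."""
    triples = sorted(triples)           # order entries by row, then column
    indptr = [0] * (n_rows + 1)
    indices, data = [], []
    for i, j, v in triples:
        indptr[i + 1] += 1              # count entries in each row
        indices.append(j)
        data.append(v)
    for r in range(n_rows):             # prefix-sum the counts into offsets
        indptr[r + 1] += indptr[r]
    return indptr, indices, data

def csr_matvec(indptr, indices, data, x):
    """y = A @ x: row r's entries live at positions indptr[r]..indptr[r+1]."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# The 3x3 matrix [[1, 0, 2], [0, 0, 3], [4, 5, 0]] as COO triples:
coo = [(0, 0, 1.0), (0, 2, 2.0), (1, 2, 3.0), (2, 0, 4.0), (2, 1, 5.0)]
indptr, indices, data = coo_to_csr(3, coo)
print(csr_matvec(indptr, indices, data, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

In practice you'd reach for scipy.sparse rather than hand-rolling this, but the sketch shows why changing the sparsity structure of a CSR matrix is painful (every insertion shifts the arrays) while multiplication is cheap.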
I've sat psychologist-administered IQ tests a year apart and had my score differ by 10 points. I've been told that this is, in fact, perfectly normal and well within the accuracy expected of IQ tests by psychologists who take them seriously. I wouldn't worry about IQ scores changing (they may well change, but it is just as likely measurement error). IQ is a very imperfect measure to begin with, and our ability to measure it, even under the best of conditions, is extremely poor. Take most IQ studies with a grain of salt.
...not a sequel, but a cash-in remake.
It's not a Mad Max movie. The main character isn't Max, the atmosphere isn't Mad Max's; it just happens to have spiked cars chasing plated cars in the wasteland.
Indeed. What they should have done was get the writer/director of the original film, who I gather had been trying to get a sequel made for over a decade, to come and write and direct the new one. Clearly whoever they got to write this didn't really understand Max's character at all.</sarcasm>
I believe Ada has pretty decent performance; the classic "Language Shootout" benchmarks game has it scoring faster than Rust for the most part.
Scott Meyers is his usual excellent self: http://www.artima.com/shop/ove...
It's stupid if you're benchmarking relative efficiency -- it's not an efficient implementation (and you'll have no trouble finding explanations for why the Python and Java code they wrote, while simpler, is not efficient).
And this is why we should not teach CS101 in Java or Python. If they'd been forced to use C this whole experiment would have turned out differently.
Not at all. If you wrote your in-memory string handling in C as stupidly as they wrote the Python and Java, you would still get poor performance in C (e.g. on each iteration malloc a new string, strcpy and strcat into it, and free the old string; compared to buffered file writes you'll lose). It's about failing to understand how to write efficient code, not about which language you chose.
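The malloc/strcpy/strcat anti-pattern has the same shape in any language. Here's a hedged Python sketch of it: rebuilding the whole string on every iteration copies the entire prefix each time (O(n²) total work), while accumulating pieces and joining once is linear. (Caveat: CPython happens to optimize simple in-place string concatenation in some cases, so the loop below may not always time quadratically there, but the algorithmic point is language-independent.)

```python
# The same inefficiency pattern as per-iteration malloc/strcpy/strcat in C.

pieces = [str(i) for i in range(10_000)]

def concat_naive(parts):
    s = ""
    for p in parts:
        s = s + p          # conceptually allocates and copies a fresh string each pass
    return s

def concat_buffered(parts):
    return "".join(parts)  # sizes once, copies each piece exactly once

assert concat_naive(pieces) == concat_buffered(pieces)
```

Same output, wildly different cost model as n grows -- which is exactly the distinction a benchmark of "language speed" silently conflates when the test programs are written this way.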
Hackers are just a migratory lifeform with a tropism for computers.