To reinforce this: Building your own engine is a great way to fall into a trap that you won't escape for years if not decades. If you're interested in building a game, then you should build a game, and use the toolchain that lets you get to your game as quickly as possible. Right now that's a race between Unity and Unreal Engine 4, and while neither one has perfect support, I'd say the userbase for Unity is sufficiently deep that it's a better starting point. Don't worry, you'll still get plenty of opportunity to code (and to learn C# - another reason to go with Unity over UE4, if that's the language you really want), but there'll be enough that you don't have to code that you can focus on the fundamentals of building your game.
My reading of Norvig's blog post is that he suggests his specific approach (stapling together short regexps with ORs) requires solving the NP-complete Set Cover problem, but he doesn't actually say anything about whether the core problem (match everything in list A and nothing in list B) does; it's conceivable that e.g. some sort of dynamic programming approach could do the job more efficiently than Norvig's algorithm does. Does anyone know whether the root problem (to avoid having to do the optimization, just phrase it as 'is there a separating regexp for the sets A and B of length less than k?') is specifically known to be NP-complete?
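For anyone who hasn't read the post: the shape of Norvig's reduction is greedy set cover. Here's a simplified sketch of that idea (the candidate generation, the length-4 cap, and the scoring are my own simplifications for illustration, not Norvig's actual code):

```python
import re

def find_separating_regex(wins, losses):
    """Greedy set-cover sketch: OR together short literal patterns, each
    matching at least one winner and no loser. Assumes every winner has
    some substring that appears in no loser."""
    # Candidate parts: substrings of winners, capped at length 4 to keep
    # the pool small (an arbitrary choice for this sketch).
    parts = {w[i:j] for w in wins
                    for i in range(len(w))
                    for j in range(i + 1, min(len(w), i + 4) + 1)}
    # Discard any part that matches a loser.
    parts = {p for p in parts
             if not any(re.search(re.escape(p), l) for l in losses)}
    solution, uncovered = [], set(wins)
    while uncovered:
        # Greedy step: take the part covering the most uncovered winners.
        best = max(parts, key=lambda p: sum(1 for w in uncovered
                                            if re.search(re.escape(p), w)))
        solution.append(re.escape(best))
        uncovered -= {w for w in uncovered if re.search(best, w)}
    return '|'.join(solution)

wins, losses = ['lincoln', 'grant', 'obama'], ['clinton', 'bush']
regex = find_separating_regex(wins, losses)
assert all(re.search(regex, w) for w in wins)
assert not any(re.search(regex, l) for l in losses)
```

The greedy step is exactly where the Set Cover hardness enters Norvig's formulation; my question above is whether a separating regexp (not necessarily built from ORed literals like this) is hard to find in general.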
With the exception of entertainment and the rare 'culturally relevant' application, the vast majority of software is primarily a tool to get its job done, rather than an item of artistic merit in its own right. The New York Times reviews are — for the most part — cultural reviews; they're not the appropriate venue for most software reviews.
With that said, there are those exceptions where one can speak about the artistic or cultural merits of a piece of software, and my strong impression is that the Times has never really stopped speaking about those. The difference between the '80s and today is that at that point, there was so much less understood and so much more that was new in the world of software that a lot of what came out was of cultural relevance and worth talking about on those merits.
I'm not going to argue with 'severely over budget' (thank you, Prop 13) but the last information available for California suggests that federal spending in the state was substantially less (by roughly 25%) than federal revenues from the state; California is, on a per-capita basis and certainly on an overall basis, one of the largest net givers to the federal budget, not a taker. Do you have any specific reason to believe that that's changed in the last few years?
Not just mammalian neurons, but invertebrate neurons too. I think that until we surpass what MomNature has already bioengineered and abandon the von Neumann/Turing model of how a computer is "supposed to be", we will not construct any AI that is more performant than what already exists in biological systems.
Almost every day I move around in a vehicle that's faster on land than anything 'Mom Nature' has produced. Several times a year I fly through the air in one that's almost an order of magnitude quicker. We took one of Nature's apex predators, carefully crafted through millions of years of evolution, and in maybe _one one-thousandth_ the time we turned it into the Pomeranian. I'm not sure why you believe we're so far behind nature, or why 'artificial' approaches are so doomed to failure compared to a natural simulationist approach, when we have overwhelming evidence to suggest the opposite.
While not often directly mathematical, several of Jorge Luis Borges's short stories are interesting efforts on his part to grapple philosophically with many of the concepts of infinity: The Library of Babel most famously, but also great stories like The Book of Sand, The Aleph, and even Death and the Compass. They won't necessarily tickle you in the same way that Stephenson's work did, but they're still a fine jumping-off point into fascinating and deeply philosophical mathematics.
Everyone else on the net seems to point to the article in the NY Post (not exactly known for its careful fact-checking) and the Post article talks about Hulu 'taking its first steps' without a single mention of what those steps are. No other news stories I can find in the last several days talk about any changes occurring to Hulu's model (other than more original programming) or the Hulu user experience. So what the hell is the Post talking about, exactly? What evidence is there — beyond some editorial negative-wishcasting — that anything like this is going on?
Compared to Windows as of 15 years ago, maybe. The Windows APIs the last few years have been mature enough that while diverse hardware testing can still improve the user experience, it's gotten substantially less necessary for game developers. That just isn't the case for Android games.
I already have half a dozen different devices in my house that I can compile on, most of which have better storage and faster CPUs. Why do I care if I have another one? My iPad isn't trying to replace one of my computers and it seems silly to judge it by the same standards.
TFA seems to be long on speculation and short on actual data. Obviously streaming video isn't bandwidth-cheap, but does anyone have real figures on how much data streaming, say, a standard 24-minute TV show would take, and how many episodes it would take to hit the 2GB monthly cap? If they can, for instance, stream a low-quality episode in 10-20 MB, then this seems like much sound and fury over very little...
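For a rough sense of scale, here's the back-of-the-envelope version; the bitrates are my own assumptions, not figures from any streaming provider:

```python
def episode_mb(bitrate_kbps, minutes=24):
    """Data used by one episode at a given stream bitrate, in MB."""
    return bitrate_kbps * 1000 * minutes * 60 / 8 / 1e6

cap_mb = 2000  # the 2 GB monthly cap
for kbps in (100, 300, 700):  # assumed low/medium/decent mobile bitrates
    mb = episode_mb(kbps)
    print(f"{kbps} kbps: {mb:.0f} MB/episode, ~{cap_mb / mb:.0f} episodes/month")
```

So a 10-20 MB episode corresponds to a stream around 100 kbps, which would fit roughly a hundred episodes under the cap; even a 300 kbps stream allows dozens.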
Isn't this effectively the core purpose of copyright law? A hundred years of precedent suggest that my free speech rights don't extend to, for instance, performing my own stage production of Spiderman for all the world to see, or for writing and selling (or giving away) my word-for-word version of "Arguing With Idiots", and I'm not sure why anyone would expect results in the digital world to be any different.
Strange, then, that the 'bitcoin isn't worth the energy it's minted on!' article isn't the one that made Slashdot headlines...
The fascinating thing there is how Wizards "tricked itself" by misreading how certain cards form gamebreaker combos. So then they embarked on an elaborate "currency value adjustment" program, aka Type 2. (With all the spinoffs, etc.; in my area, "1.5" and "Legacy" and so on were never very popular.)
By being relegated to "Type 1", all those power cards were effectively cordoned off into a backwater and lost most of their effective value. Then, as the years rolled on, once cards left Type 2 they also dropped in value like a stone.
Well, except that this didn't happen. Yes, a number of cards definitely lost value once they fell out of Type 2 - but the price of the core Type 1 power cards has never actually gone down, and in fact Legacy has meant that a number of secondary cards from that era have now skyrocketed. A white-border Black Lotus will set you back more than a thousand dollars; a black-border one you'd be lucky to find under two thousand. All the dual lands are north of $50 in white border now and more than $200 in black border; the blue duals are at least $500-600 each in black border. Some of the cards from the earlier sets (esp. Arabian Nights) have seen corrections, but that's more a matter of the market realigning itself around playability rather than mere rarity - old out-of-print cards that see any tournament play at all (or even saw tournament play at one point) have skyrocketed (Karakas at $50-60, Sylvan Library at $25, etc.)