Comment Re:Money or Art? (Score 4, Informative) 175
Though I never intended for Auro to be a “retro-style” game, what I intended doesn’t matter at all, and it’s 100% my fault for failing to communicate in a language people understand.
C++11 has, for me, made the language tolerable. The old problem of C++ is still there: everyone agrees that you should only use a subset of the language, but no two developers agree on what that subset should be. Now, at least, there are things in the standard library that let you write APIs with sensible memory management. shared_ptr and weak_ptr let you manage objects that can be aliased (with a small run-time overhead); unique_ptr lets you handle objects that can't be. Refactoring existing C++ APIs to use them takes a bit of time, but it's well worth the effort. With the addition of move constructors / rvalue references to the language, they can be implemented in such a way that they can trivially be stored in arbitrary collections, making them actually useful.
It's also been nice to see C++11 and C++14 supported by compilers and standard libraries quickly. C++14 was supported by Clang and libc++ by the time the standard was ratified by ISO. I think GCC and libstdc++ were only a couple of days later. Microsoft is still the slowest, but the latest versions of their compiler support most of the useful language features.
Just because a piece of software is old doesn't mean it suddenly stops doing its intended function.
It usually does, because the intended function changes over time. This is particularly true for business software (COBOL's niche), where regulatory requirements change over time and as companies grow to cover more jurisdictions, where accounting best practices change, where the company structure changes, and so on. Eventually you get to the point where the software was originally designed to do something so totally different to what it's doing now that it may make more sense to rewrite it than to keep adding hacks.
Could you ever imagine pro video editing (i.e. Adobe Premiere / After Effects) 100% within Chrome
Depends. With WebGL / WebCL, I can imagine preview effects there quite easily. I can also imagine that it would be nice to be able to do the real rendering runs on a rack somewhere else. The more difficult thing is moving the multiple GBs of data between the two. Possibly uploading the raw source data to the server, keeping the local copy, and just syncing the non-destructive editing instructions would work.
The "problem of needing offline access" most certainly has not been solved
Note that HTML5 does allow effectively unlimited local data to be stored (the policy is set by the user) and applications that run completely disconnected. It's possible to write a web app that uses the browser for the UI, but only uses the network for software updates.
You can show me the micro-benchmarks all day long; doesn't change the fact that a complex UI in JavaScript is vastly slower.
You're conflating JavaScript and the DOM. With FTL, JavaScriptCore can run C code compiled via Emscripten to JavaScript at around 60% of the speed of the same C code compiled directly. That's not a huge overhead (a 40% slowdown is roughly a generation-old CPU, or a C compiler from 5 years earlier). Transitions from JavaScript (or PNaCl-compiled code) to the DOM, however, are very expensive. This is why a lot of web apps just grab a canvas or WebGL context and do all of their rendering inside that, rather than manipulating the DOM. Optimising the DOM interactions without sacrificing security is quite a difficult problem.
This restaurant was advertising breakfast any time. So I ordered french toast in the renaissance. - Steven Wright, comedian