
Comment Re:Money or Art? (Score 4, Informative) 175

If that's the message you get from TFA, then I can only assume that you gave up after the first few paragraphs. I'd recommend going and reading the rest. I don't see how you can square that message with this quote from TFA, for example:

Though I never intended for Auro to be a “retro-style” game, what I intended doesn’t matter at all, and it’s 100% my fault for failing to communicate in a language people understand.

Comment Re:$30 (Score 1) 515

Really? Last time I went to Edinburgh it was on the cheapest ticket type. The restriction was that I needed to go on the train that I booked, but that wasn't particularly arduous (and no different from a plane). The only time I don't buy those is when I'm coming from the airport and have unknown delays at immigration / baggage claim. As to the limited numbers, I think they're only sold 2 weeks in advance, but I've not normally found booking trains for a trip 2 weeks in advance to be a problem, and if it's an emergency then I would generally expect to pay a bit more.

Comment Re:An Old Story (Score 1) 386

C++11 has, for me, made the language tolerable. The old problem of C++ is still there: everyone agrees that you should only use a subset of the language, but no two developers agree on what that subset should be. Now, at least, there are things in the standard library that let you write APIs with sensible memory management. shared_ptr and weak_ptr let you manage objects that can be aliased (with a small run-time overhead); unique_ptr lets you handle objects that can't be. Refactoring existing C++ APIs to use them takes a bit of time, but it's well worth it. With the addition of move constructors / rvalue references to the language, they can be implemented in such a way that they can trivially be stored in arbitrary collections, making them actually useful.

It's also been nice to see C++11 and C++14 supported by compilers and standard libraries quickly. C++14 was supported by Clang and libc++ by the time the standard was ratified by ISO. I think GCC and libstdc++ were only a couple of days later. Microsoft is still the slowest, but the latest versions of their compiler support most of the useful language features.

Comment Re:Pretty sure the heat death of the universe will (Score 1) 386

While this works as far as it goes, it restricts your library boundaries to POD types with no templates and no overloading. This doesn't completely defeat the point of using C++, but it does mean, for example, that you have to fall back to C-style memory management (no std::shared_ptr / std::unique_ptr, which modern C++ libraries should be using for pretty much anything that crosses an API boundary).

Comment Re:Pretty sure the heat death of the universe will (Score 1) 386

Just because a piece of software is old doesn't mean it suddenly doesn't do its intended function.

It usually does, because the intended function changes over time. This is particularly true for business software (COBOL's niche), where regulatory requirements change over time and as companies grow to cover more jurisdictions, where accounting best practices change, where the company structure changes, and so on. Eventually you get to the point where the software was originally designed to do something so totally different to what it's doing now that it may make more sense to rewrite it than to keep adding hacks.

Comment Re:Swift is destroying Rust. (Score 1) 270

I'm not sure how they're aggregating the data, but some of their source data is very surprising. Apparently Objective-C is the most popular language on GitHub projects, yet way down the list for projects tracked by Ohloh (which, as I recall, has been called OpenHub for a while now, so I don't know how old their data is). I'd have expected GitHub to be fairly representative of open source projects in general, though I wonder how good both the GitHub and Ohloh results are at deduplication - I have several copies of exactly the same code on GitHub...

Comment Re:No (Score 1) 276

40% is a difference that's usually easy to make up with some algorithmic tuning. It's typically less than the difference between -O0 and -O2, and it's also less than the difference a recent C compiler will give you over an older one. You can easily lose 40% via the various abstraction layers that people build on top of C to make it usable.

Comment Re:Who sets local storage policy? (Score 1) 276

In the end, the browser sets the policy, as the browser is the program enforcing it, though most browsers put this largely under user control (typically allowing a small amount without prompting and then asking the user before each increase). I don't have an iOS device, so I can't tell you how it implements this.

Comment Re:Resource Proximity & Browser Limits (Score 2) 276

Could you ever imagine pro video editing (i.e. Adobe Premiere / After Effects) 100% within Chrome?

Depends. With WebGL / WebCL, I can imagine previewing effects there quite easily. I can also imagine that it would be nice to be able to do the real rendering runs on a rack somewhere else. The more difficult thing is imagining moving the multiple GBs of data between the two. Possibly uploading the raw source data to the server, keeping the local copy, and just syncing the non-destructive editing instructions would work.

Comment Re:Yeah, right ... (Score 1) 276

The "problem of needing offline access" most certainly has not been solved

Note that HTML5 does allow effectively unlimited (policy set by the user) local data to be stored, and applications that run completely disconnected. It's possible to write a web app that uses the browser for the UI but only uses the network for software updates.

Comment Re:No (Score 3, Interesting) 276

You can show me the micro-benchmarks all day long; doesn't change the fact that a complex UI in JavaScript is vastly slower.

You're conflating JavaScript and the DOM. With FTL, JavaScriptCore can run C code compiled via Emscripten to JavaScript at around 60% of the speed of the same C code compiled directly. That's not a huge overhead (40% is a generation-old CPU, or a C compiler from 5 years earlier). Transitions from JavaScript (or PNaCl compiled code) to the DOM, however, are very expensive. This is why a lot of web apps just grab a canvas or WebGL context and do all of their rendering inside that, rather than manipulating the DOM. Optimising the DOM interactions without sacrificing security is quite a difficult problem.

Comment Re:There will always be a need... (Score 1) 276

Web app doesn't necessarily imply web app hosted by someone else. For companies, there's a lot of advantage in being able to roll out cheap client machines that just run a web browser and have all of the apps in a single rack somewhere. To upgrade everyone in the company, just upgrade a single install. Don't worry about employees that can't remember to always save data on the fileserver where it's backed up, because you've configured the web apps to only be able to save there (or to always save a copy there).
