Comment Re:More evidence of the W3C's increasing irrelevan (Score 1) 205

The original plan of WhatWG was to stabilize in 2022. Yes. 12 years from now. But the devil is in the details and I am not 100% up to date on the W3C details. I get the feeling that the W3C is trying to be the Debian of standardization organizations though. Slow and stable.

The WHATWG never planned to stabilize in 2022. Ian Hickson, the editor of HTML5, predicted that it would take until 2022 to reach the equivalent of W3C Recommendation status: where every single feature has at least two independent implementations, plus a full test suite (which would require hundreds of thousands if not millions of tests for such a large spec). The spec becomes basically stable at the Candidate Recommendation stage, which Ian originally predicted to be 2012 (and that looks to be plausible so far). You can read more about this in the WHATWG FAQ.

Comment Re:More evidence of the W3C's increasing irrelevan (Score 2, Informative) 205

First off, Canvas is fucking redundant and never should have been created in the first place. SVG has existed since 2001. Canvas is a crappy JavaScript-only version of SVG with half the features stripped out. There's no reason to use canvas in the first place - just use SVG. Most browsers support it and even if they don't there's good plugin support. And it's an actual released standard.

SVG is a declarative vector graphics format, while canvas is an immediate-mode 2D graphics API for JavaScript. Using SVG for a dynamic web application is kind of ridiculous – something as simple as ctx.fillRect(x, y, w, h) to draw a rectangle would require several lines of DOM methods.
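
Here's roughly what I mean, drawing the same rectangle both ways (just a sketch – the element IDs are made up, and assume a <canvas id=c> and an <svg id=s> already on the page):

// Canvas: one call on the 2D context
var ctx = document.getElementById('c').getContext('2d');
ctx.fillRect(10, 10, 100, 50);

// SVG: create a DOM node and set each attribute separately
var rect = document.createElementNS('http://www.w3.org/2000/svg', 'rect');
rect.setAttribute('x', '10');
rect.setAttribute('y', '10');
rect.setAttribute('width', '100');
rect.setAttribute('height', '50');
rect.setAttribute('fill', 'black');
document.getElementById('s').appendChild(rect);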

HTML5 video is completely fucking useless, because:

1. You can't stream video. (No, not a file, I mean live video.)

You can stream live video just fine, if the format and browser support it. I've heard reports of people getting this to work just fine with Ogg. This isn't the big use case anyway, though.

2. You can't full screen HTML 5 video. (The spec forbids this as a security flaw.)

You can full-screen HTML5 video in Firefox 3.6 and later (IIRC) via the context menu. There's no standard JavaScript API to full-screen the video yet, because no one has worked out the security implications in enough detail: you'd have to design it carefully so that malicious scripts can't take over the screen (maybe just by copying what Flash does). WebKit has an experimental API in that regard. Plus, you can always just make the <video> tag fill the screen and let the user hit F11 if they want – YouTube does this, and it works pretty well (although it's not ideal).
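
The fill-the-window approach is a few lines of style tweaking (a sketch only – the <video id=v> element ID is made up):

var v = document.getElementById('v');
v.style.position = 'fixed';   // pin it to the viewport
v.style.top = '0';
v.style.left = '0';
v.style.width = '100%';
v.style.height = '100%';
v.style.zIndex = '9999';      // keep it above the rest of the page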

3. There is no standard format, leaving you to encode an unknown number of versions. Hell, even if you stick with just H.264, you still need to encode to multiple profiles if you want to support everything.

HTML5 video is not yet usable as the only video format anyway, since you have to support IE.

4. You can't seek in videos in anything remotely near a reliable manner. You know how you can link to a certain time in a Youtube video? Not possible in HTML5.

It's perfectly possible. YouTube has some JavaScript that will automatically seek the Flash video based on the fragment, and they could do exactly the same for HTML5 video. Seeking is as simple as setting video.currentTime. YouTube already uses this attribute to make its custom seek bar work.
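
Here's a rough sketch of fragment-based seeking (the #t=90 fragment format and the <video id=v> element ID are made up for illustration, not any standard):

var video = document.getElementById('v');
var match = /t=(\d+)/.exec(location.hash);
if (match) {
  // Wait until the browser knows the duration, then jump to the requested time
  video.addEventListener('loadedmetadata', function () {
    video.currentTime = +match[1];
  }, false);
}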

5. You can't switch to lower/higher-bandwidth versions while the video is playing.

Of course you can. Just record currentTime, change the src, and then set the new currentTime. This is all trivial from JavaScript, much easier for a typical web developer than trying to communicate with Flash.
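
Something like this, roughly (the file name is made up, and a real player would also preserve the paused/playing state):

var video = document.getElementById('v');
var resumeAt = video.currentTime;      // remember where we were
video.src = 'movie-low-bitrate.ogv';   // switch to the other encoding
video.load();
video.addEventListener('loadedmetadata', function () {
  video.currentTime = resumeAt;        // pick up from the same spot
  video.play();
}, false);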

The HTML5 spec as it stands today is useless. The features it does offer above HTML4 already exist and are handled better via existing specs or plugins. Pretty much anything that isn't canvas or video isn't implemented anywhere, making the features entirely useless instead of done better elsewhere.

HTML5 is a vast spec that includes a huge number of improvements and refinements. The HTML5 parsing algorithm, for instance, will make a lot of weird websites work exactly the same across browsers when formerly they all behaved a bit differently – it's enabled by default in Firefox 4, and experimentally available for WebKit. This and many other clarifications are giving each new browser release more opportunity to be consistent with other browsers.

Canvas, video, and audio are not yet as reliable, widely available, or full-featured as their plugin-based equivalents, but they're suitable at least as fallbacks where the plugins aren't available (iOS, sometimes Linux). SVG and MathML embedded in HTML (not XHTML) documents are usable on some cutting-edge browsers, with painless ways to fall back to rasterized versions (requiring barely more work than just using the rasterized version). Microdata can be used today and some search engines will recognize it.
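
Microdata, for instance, is just a few extra attributes on markup you already have (a sketch – the vocabulary URL and property names below are only an example):

<div itemscope itemtype="http://data-vocabulary.org/Person">
  My name is <span itemprop="name">Jane Doe</span>
  and I work as a <span itemprop="title">librarian</span>.
</div>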

(Yes, you can do some of the above using plugins. But plugins don't work on iOS, don't work reliably on Linux, and don't integrate well with the page. If you want to do Flash video, you have to script it in ActionScript, not JavaScript. Plugins tend to have all sorts of nasty warts, like not obeying CSS or trapping commands that the user intends the browser to receive. I've seen Flash videos display on top of random content that was supposed to overlay the video, and I hate when keyboard shortcuts stop working when I have anything Flash focused. It's just a lousy solution.)

HTML5 includes a lot of nice little things that can easily be used to augment your existing pages without too much work, while keeping your legacy code for legacy browsers. Examples of this are the autofocus attribute, the placeholder attribute, and getElementsByClassName(), all of which are supported in recent enough Firefox and WebKit (for instance).
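
For instance (a sketch – older browsers just ignore attributes they don't understand, so you can keep whatever JavaScript fallbacks you already have):

<input type=text name=q placeholder="Search the site" autofocus>
<script>
  // Native in recent Firefox/WebKit; fall back to your old code elsewhere
  if (document.getElementsByClassName) {
    var results = document.getElementsByClassName('search-result');
  }
</script>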

And it also makes a lot of convenient things valid that browsers already supported anyway. This includes <meta charset="">, omitting quote marks and closing tags in more cases than HTML 4 permitted, standardizing innerHTML, standardizing contenteditable (although not precisely enough for real interop yet), standardizing autocomplete, and permitting custom attributes if they start with "data-".
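
As a small sketch, all of the following is conforming HTML5, even though browsers have accepted most of it for years:

<!DOCTYPE html>
<meta charset=utf-8>
<title>Example page</title>
<ul>
  <li data-row-id=1>First item
  <li data-row-id=2>Second item
</ul>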

I encourage you to read the Editor's Draft of HTML5 differences from HTML4 for a more comprehensive list of what HTML5 adds. Many of these features are supported in enough browsers to be useful today, which is why some HTML5 features are used by practically all major sites and all up-to-date JS libraries. There is no reason not to use the things that are useful today, which is a fairly decent amount – although nothing compared to what we'll be seeing five years from now when big features like canvas and video/audio are better-developed and more reliably available.

And, of course, all of this is only talking about the HTML5 spec proper. If you include all the other things that are often labeled "HTML5", there's tons more to talk about: CSS3, Web Storage, Web Databases, WebSockets, and a host of other features. This is a time where the web is moving forward at a very rapid pace, in sharp contrast to the stagnation we saw five to ten years ago – and yes, quite a lot of features are already usable to some degree.

Comment Re:W3C is the problem (Score 1) 205

>To replicate, cut and paste this into your URL bar:
>data:text/html,<form><input name=foo type=password pattern=...><input type=submit></form>
>Then type one or two letters in the password field (not more) and try to submit.

Chrome 6.0.472.63 works
Opera 10.62-6438 fails
Firefox 3.6.10 fails

(All on Linux x86-64.)

You're not getting what I was testing. Opera prints the password in cleartext when you do what I described; no other browser does. Firefox 4 nightlies print an unhelpful error message (unhelpful because I provided no title attribute on the input element, for brevity; otherwise it would be helpful), and everything else just submits the form, but nothing else prints the password.

Comment Re:W3C is the problem (Score 4, Interesting) 205

Correction -- Firefox 4 is going to be Firefox's first release that begins to support the HTML5 form enhancements. Opera has already supported those form enhancements since version 9.5.

I quite deliberately said that Firefox 4 will be the first good implementation of HTML5 form enhancements. I wrote HTML5 form support for MediaWiki, but disabled it – partly because of an inexcusably bad WebKit bug, but also because Opera's support is just cruddy. The UI is terrible – red-bordered boxes that only appear when you try to submit the form, not when you actually enter the invalid input.

And I quickly found one killer bug: if a password element doesn't meet its constraints, it outputs the currently-entered password to the screen in plaintext, so <input type=password pattern=....> to require passwords of at least four characters is a non-starter. I reported the bug to Opera around the time 10.00 was beta, and it's still not fixed in 10.60. To replicate, cut and paste this into your URL bar:

data:text/html,<form><input name=foo type=password pattern=...><input type=submit></form>

Then type one or two letters in the password field (not more) and try to submit. So, Opera's great and all, but its implementation of this stinks.

Comment Re:Jeeze. (Score 4, Insightful) 205

Why does it have to be implemented before it can become a finalized specification?

Because before it's implemented, it's just some words on a web page, and no one has actually tried it. Implementers inevitably spot parts that are vague, or too complicated or expensive or slow to implement, only when they actually try to implement it. Also, implementing it will mean it gets the regular security and UI review that all new browser features get, which will result in more feedback. And finally, you get almost no feedback from regular authors or users until it's shipping in at least beta versions of browsers. This is why no W3C spec can be declared finished without two interoperable implementations.

Another way of looking at it is that you could try speccing everything first, then implementing it. But that way you miss a lot of things and wind up putting out a bad standard. Instead, web standards are usually developed in tandem with implementations, and remain open to change, as long as change is still feasible, when new information comes to light. They're only really set in stone when so much content depends on particular behavior that browsers can't change it without breaking websites – barring that, they can always be improved. Even Recommendations aren't final in practice, because they can be superseded by later versions. HTML 4.01 is a Recommendation, but HTML5 contradicts it in many places, and takes precedence.

Comment Re:Jeeze. (Score 3, Informative) 205

It's probably the "need" for paper and in-vivo meetings.

If you didn't need them, standards would fly instead of committee members.

HTML5 uses no in-person meetings. The HTML Working Group charter at the W3C even says "This group primarily conducts its technical work on a Public mailing list". Everything is done through a combination of the mailing list and Bugzilla, with some IRC discussion thrown in on the side. There are teleconferences, but nothing important is done there, and the editor doesn't attend them – the decision policy requires that all requests for changes be made through Bugzilla and other web interfaces. There's also no paper involved anywhere.

Really, almost nothing at the W3C is in-person. People contribute from all over the world, both W3C members and non-members. In-person meetings are impractical. This is particularly true for HTML5 – the WHATWG version of the spec is really managed exactly like an open-source project with a benevolent dictator, not at all like a conventional spec.

The reason specs progress slowly is because it takes lots of programmer-hours to implement them correctly. Most of HTML5 is fully specced and just awaiting implementation. Programming is expensive work.

Comment Re:More evidence of the W3C's increasing irrelevan (Score 5, Informative) 205

When the draft spec for a technology that moves so fast and has so much widespread adoption is still deemed several years off I don't know how anyone can take their recommendations seriously. We're already at a level of fairly good interoperability amongst the core browser engines for the base features we need. If developers and designers took any notice of this then we'd probably all be still building sites with tables.

This is why the WHATWG – the body that originally developed HTML5, and which still develops a version in parallel to the W3C – abandoned the idea of rating the stability of the spec as a whole. The WHATWG spec version (which is edited by the same person as the W3C spec, contains everything the W3C spec does plus more, and has useful JavaScript annotations like a feedback form) is perpetually labeled "Draft Standard", and per-section annotations in the margins tell you the implementation status of each feature.

The W3C Process, on the other hand, requires everything to proceed through the Candidate Recommendation stage, where it gets feature-frozen, and therefore becomes rapidly obsolete. It's quite backwards, but doesn't seem likely to change soon. So for sanity's sake, you can just ignore the W3C and follow the WHATWG instead.

(I really doubt that Philippe Le Hegaret actually said anything like what he was quoted as saying in TFA, though. It doesn't match what I've heard from him or the W3C before – no one seriously thinks authors shouldn't use widely-implemented things like canvas or video with suitable fallback. It sounds more like an anti-HTML5 smear piece. Paul Krill has apparently written other anti-HTML5 articles.)

Comment Re:W3C is the problem (Score 3, Informative) 205

But the real question is why does it take so long to come up with these standards? HTML5 was started by the WHATWG back in 2004. CSS3 has been around since 2005. Just get them finalized already. Don't whinge about browsers not fully supporting the standards if you don't give them a fixed document to work towards.

The bottleneck is mostly implementation, not standardization. For instance, Firefox 4 is going to be the first good implementation of HTML5 form enhancements, and those were first standardized in Web Forms 2.0 – in 2003. The spec hasn't changed all that much since then (although it has changed), and has been stable for years, but none of the major browsers gave it high enough priority to implement it well. Browser implementers have lots of things to do, like revamping UI and improving performance and security, and they can only implement so many standards per release. Then, of course, they report back all sorts of problems with the proposed standard, so it has to be changed, then changed again.

So it's mostly a matter of limited programming time, nothing mysterious.

Comment Re:I don't get the point (Score 1) 262

"Hope it doesn't contain vulnerabilities!"

Which is why I added the caveat. In reality, you don't really have to restart the system even on a fatal flaw. Init isn't terribly insecure with the old version if the exploit was a vulnerability in sockets for instance; whereas if there was a socket bug in libc and you were running Apache, sure as hell you want to reload Apache with the fresh version.

Your average sysadmin (or even your above-average sysadmin) is going to be pretty hard-pressed to figure out which services a given library vulnerability "really" affects. Without really understanding the code, it's hard to say. The only safe thing is to restart everything.

Comment Re:I don't get the point (Score 1) 262

Libc is easy. Install the update while the app is running. The old version of the library stays alive in RAM as long as processes still have handles to it, which is no big whoop unless it's an exploit that you really must clean up immediately. If application X uses libc, the next time it's started it'll get the new version of the library and happily co-exist with the old one, nay?

Sure. You just have to restart everything using libc, like for instance:

$ sudo lsof -c init 2>/dev/null | grep libc
init 1 root DEL REG 251,0 269825 /lib/tls/i686/cmov/libc-2.11.1.so

Notice how it's deleted, so presumably init is using an old libc version that was upgraded. Hope it doesn't contain vulnerabilities! If you can't tolerate rebooting your system, you probably can't tolerate restarting every single process, either. And if you can leave unpatched libc running in all these daemons, why not an unpatched kernel too?

All that said, it would be nice if distros could apply patches live automatically for the benefit of regular users, who ignore the "please reboot" message (or even just take hours to notice it). At least it would reduce exposure to vulnerabilities. But this can't fairly be billed as a way to avoid rebooting altogether, which is how it's often presented: "No More Need To Reboot". Wrong.

Comment Re:I don't get the point (Score 1) 262

Hardware failure and hardware upgrade can be handled by VMWare FT assuming your app fits into 1 vCPU (this will probably be relaxed in the future but I have heard nothing about even experimental support for vSMP yet).

Okay, this is a legitimate point. If you use clustering or something, then you might be immune to most hardware failures. Even if you use regular hardware, if you have enough hardware redundancy you're only subject to CPU/RAM/motherboard failure, and most of that's hot-swappable for upgrades with the right OS. Not perfect, but planned downtime maybe once every five or ten years for an OS upgrade is probably acceptable. It's possible.

But it's much more common these days to just design systems where you can reboot nodes without downtime, so I don't see hot-patching allowing "No More Need To Reboot" except in a very small minority of setups. Better to think of it as a tool to increase security by letting you deploy patches faster.

Comment Re:Scary analogy (Score 1) 262

The difference between mainframes and regular PCs is that one mainframe's role is taken on by many PCs. With proper setup, you should have redundancy between the PCs, so you can reboot them one at a time without affecting service.

This is often impossible considering the workload. This is why you see 32 core servers with many gigabytes of ram.

I run a 16-core server with 16 GB of RAM. I'm going to deploy a second one soon with automatic failover to eliminate downtime for routine administration. It's perfectly possible for the vast majority of services. I'll grant that there will always be exceptions, probably for the most part custom-written applications that weren't designed for redundancy.

You can't do that if you have only one mainframe.

Yes you can, and that's the whole point.

You can't reboot one mainframe at a time without affecting service, if you only have one mainframe. Rebooting it is going to leave you with no running servers, and it's hard to provide service then. :) Instead, you're forced to design the system so you never have to reboot it, which is much harder.

Comment Re:userspace hotpatching is possible as well. (Score 1) 262

When your kernel needs an update, you use ksplice. If libc needs an update, you hot-patch libc in the same way. "But there's no way to do that!", you say? Actually, there is--it's just proprietary. The place I work at has implemented userspace hotpatching on linux for several architectures.

And for hardware failures? Or critical service restarts? Or a bug causing an OS crash? Put it this way: you can either try to minimize the downtime of each server, or make it so that the downtime of one server doesn't affect service. The former is much more complicated and error-prone, and is still going to fail sometimes, so you need the latter regardless if you're really aiming for reliability. And the latter makes the former unnecessary.

Comment Re:Scary analogy (Score 1) 262

Well let's be honest here, the risk/gain isn't exactly working out for stable enterprise uses.

Exactly backwards.

This feature replicates what mainframes have been doing for years. Specifically because businesses want zero downtime, if possible.

The difference between mainframes and regular PCs is that one mainframe's role is taken on by many PCs. With proper setup, you should have redundancy between the PCs, so you can reboot them one at a time without affecting service. You can't do that if you have only one mainframe. Even with two, you can only do it if you can afford to double your load. So this might be needed for mainframes, but not for PCs.

Comment I don't get the point (Score 1) 262

Okay, suppose this is perfectly reliable. Let's say I'm running a high-availability server and can't stand any downtime. Now when my kernel needs an update, I don't have to reboot, great!

So what about when, say, libc needs an update? As long as programs are still using it, they'll be using the outdated version. Am I supposed to restart all programs using libc? That will cause downtime just like a reboot (although maybe a bit less).

Or what about when I need a hardware upgrade? Or there's a hardware failure? Or what happens when that critical application requiring 100% uptime needs a security fix? What am I supposed to do then?

There's no way to avoid outages completely for any given machine. PC OSes aren't meant for that. Any high-availability service needs to be able to tolerate the failure of any one machine. So why not just reboot it when you get a critical update to the kernel or major system library? That way you know that the machine reboots properly, too.

My suspicion is that this is mainly meant to lure in Linux users who want the "please reboot your computer" messages to go away. But those messages are misleading. If you use Ksplice and never reboot, your libraries will remain outdated indefinitely – it's not secure. Distros would do better to ask for reboots only on security updates, and to ask for them when libraries and running applications (if they can't easily be restarted) are updated, not just the kernel.
