I don't really care if it's served as XML or not; the point is that if it's not well-formed XML it becomes a massive ballache to deal with, because so many prevalent tools and libraries expect well-formed XML.
It's only syntax, so it shouldn't be a big deal. There are plenty of XML-based tools that are useful, and HTML5 goes to some lengths to define the text/html (i.e. non-XML) syntax so you can still use those tools and just translate the syntax at the edges.
The text/html and XML syntaxes are based on exactly the same underlying conceptual model (the DOM tree), so you can switch without any radical changes. E.g. the validator.nu HTML5 parser implements the same APIs as standard XML parsers - drop it in front of your existing XML tools and libraries, stick an HTML serialiser on the other end, and your system can work pretty much the same as before (with the bonus of working for any arbitrary page on the web, not just the tiny fraction that are well-formed XML).
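As a small illustration of why tolerant parsing matters, here's a sketch using only Python's standard library. (`html.parser` is not a full HTML5 parser like validator.nu or html5lib, but it shows the error-recovery idea: the same tag soup that a strict XML parser rejects outright still yields a usable parse.)

```python
# Tag soup breaks strict XML tooling but a tolerant HTML parser recovers.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

soup = "<p>unclosed paragraph <b>bold & messy"

# A strict XML parser rejects this outright (unclosed tags, bare '&').
try:
    ET.fromstring(soup)
    xml_ok = True
except ET.ParseError:
    xml_ok = False

# A tolerant HTML parser just reports what it sees.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(soup)

print(xml_ok)          # False: not well-formed XML
print(collector.tags)  # ['p', 'b']: still parsed
```

A full HTML5 parser goes further and defines exactly which DOM tree every byte stream produces, which is what makes "drop it in front of your XML tools" possible.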
The ethos surrounding HTML5 is: well, lots of old sites didn't follow newer standards, so let's make those web sites standard by taking all the shit they did and making it the standard.
Who is helped by a standard that almost everybody ignores? If you, say, want to write code to parse HTML pages, and you try to implement what HTML4 specifies (based on SGML), your code will be pretty useless because HTML4 is incompatible with reality and you'll get incorrect output most of the time (stray characters, incorrectly nested elements, half the page text disappearing inside a misparsed script element, etc). Similarly if you implement what XHTML specifies, you'll fail since most pages aren't well-formed XML. You can declare that those pages are broken and non-standard but that doesn't stop them from existing and being a serious problem for anybody writing software that interacts with the web.
Nowadays you can just implement what HTML5 specifies (or find a library that already does it), and your parser will work identically to the current or near-future versions of all major browsers - it's defined in enough detail that there's no ambiguity in how to process any stream of bytes. That's never been possible before, when the standards were focused on some vision of a simple coherent syntax and refused to deal with the messy details that are critical in real life.
If you want to document a set of best practices for writing HTML, with rules for lowercase names and closing tags and quoting attributes and for indentation etc, that's fine and would be nice (especially if you could find a way to motivate people to follow the best practices - a decade of promoting XHTML doesn't seem to have stopped people writing terrible code so we need a better way). Meanwhile, HTML5 is solving the harder problem of how to cope with people who ignore those rules.
SpiderMonkey uses 64-bit value types on all architectures (x86, x86-64, ARM, etc), storing either a 64-bit float or a 32-bit int or a pointer (31 bits on 32-bit, 47 bits on 64-bit), so it shouldn't make any difference to their memory usage. (The non-float values get packed into the range of unused NaN float representations, to avoid ambiguity). I think other modern JS engines do pretty much the same thing. JS semantics are that numbers are 64-bit floats, so implementations couldn't really use 63-bit ints (too precise) or 32-bit floats (too imprecise) anyway, though 32-bit ints are a safe optimisation.
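A toy sketch of that NaN-boxing trick, reinterpreting bits with Python's `struct` module. The tag constants here are illustrative, not SpiderMonkey's actual encoding; the point is just that a non-float payload can hide in the unused quiet-NaN bit patterns of a 64-bit double.

```python
# Toy NaN-boxing: store a 32-bit int in the payload bits of a quiet NaN,
# so every value fits in one 64-bit slot. Real doubles are stored as-is.
import struct

QNAN = 0x7FF8_0000_0000_0000       # quiet-NaN exponent/mantissa bits
INT_TAG = 0x0001_0000_0000_0000    # illustrative tag meaning "boxed int32"

def box_int(i: int) -> float:
    bits = QNAN | INT_TAG | (i & 0xFFFF_FFFF)
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def is_boxed_int(v: float) -> bool:
    bits = struct.unpack("<Q", struct.pack("<d", v))[0]
    return bits & (QNAN | INT_TAG) == (QNAN | INT_TAG)

def unbox_int(v: float) -> int:
    bits = struct.unpack("<Q", struct.pack("<d", v))[0]
    return bits & 0xFFFF_FFFF

v = box_int(42)
print(is_boxed_int(v), unbox_int(v))  # True 42
print(is_boxed_int(3.14))             # False: an ordinary double
```

A real engine does the same bit-twiddling in native code, with more tags (pointers, booleans, etc.) and care to never confuse a boxed value with a NaN produced by actual arithmetic.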
Gazelle is from Microsoft Research, and their paper discusses the details of the security model - it's not just a marketing claim.
The idea is that every 'origin' (basically a domain name, which is used as the basis for access control in all modern browsers) is separated into its own sandboxed process. If a page on your domain embeds an iframe from an advertiser's domain, the iframe is rendered in a separate process, and all communication is handled through a Browser Kernel which enforces the security constraints (e.g. preventing the advert from touching or rendering anything outside its iframe box, even if an attacker can find a way to execute arbitrary code in it). Plugins are handled in the same way.
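The access-control model can be sketched in a few lines. This is a toy illustration of origin-keyed mediation, not Gazelle's actual API: each principal is identified by its origin (scheme, host, port), and a central broker (the "browser kernel") decides whether one principal may touch another's resources.

```python
# Toy sketch of origin-based isolation: a broker mediates all
# cross-principal access, denying cross-origin requests by default.
from urllib.parse import urlparse

def origin(url: str) -> tuple:
    p = urlparse(url)
    port = p.port or {"http": 80, "https": 443}.get(p.scheme)
    return (p.scheme, p.hostname, port)

class BrowserKernel:
    """Stands in for the trusted kernel that all 'processes' talk through."""

    def allow(self, requester_url: str, target_url: str) -> bool:
        # Same-origin only; an embedded advert's iframe would fail this
        # check against the embedding page and be confined to its own box.
        return origin(requester_url) == origin(target_url)

kernel = BrowserKernel()
print(kernel.allow("http://example.com/page", "http://example.com/api"))   # True
print(kernel.allow("http://example.com/page", "http://ads.example.net/"))  # False
```

The real system enforces far more than this one predicate (rendering boundaries, plugin I/O, etc.), but everything funnels through the same kernel-mediated pattern.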
Chrome's security model doesn't handle that kind of separation of multiple sites within a single page. But Gazelle sacrifices some backward compatibility (e.g. it removes the document.domain attribute, and it requires all plugins to be rewritten to use the Browser Kernel instead of directly accessing the network or filesystem), which is unlikely to be acceptable in practice.
And Gazelle is certainly not a replacement for the IE engine - it's built on the existing IE7 components for parsing, rendering, scripting, etc. It's research, and the value is its ideas, some of which could perhaps be integrated into current browser engines to improve security. It's not meant to be a real browser engine, but it seems successful as a research experiment.
There was a pretty good David Attenborough programme on BBC TV last week about Darwin and evolution that showed many of the subsequent discoveries.
There's also an interesting quote from David Attenborough, in response to people asking why he did not give "credit" to God for the subjects of his nature documentaries:
They always mean beautiful things like hummingbirds. I always reply by saying that I think of a little child in east Africa with a worm burrowing through his eyeball. The worm cannot live in any other way, except by burrowing through eyeballs. I find that hard to reconcile with the notion of a divine and benevolent creator.
My father wouldn't let me read this because it's somewhat anti-feminist.
"Somewhat"? In Flatland, the social status of men is proportional to their number of sides (triangles are the lowest class, and priests are nearly circles); women are even lower, being straight lines. Women are not allowed to walk in public spaces without swaying and emitting noises, so that men do not accidentally get impaled on them. They have to enter their houses by the back door. They are considered "wholly devoid of brain-power", driven by emotion and instinct and lacking memory, and they receive no education.
But it's social satire, not a reflection of the author's views. He was "a firm believer in equality of educational opportunity, across social classes and in particular for women", and the book is attempting to highlight a Victorian mindset that was still prevalent at that time. The women in the book act in far more complex ways than their men give them credit for. The author even says "To my readers in Spaceland the condition of our Women may seem truly deplorable, and indeed it is" - he's not happy with how they're treated, and readers in Spaceland will hopefully see that it's caused by the absurd class system holding them back, though the narrator can't avoid falling back into the prejudices of his society.
The book makes more sense when you understand the context. The Annotated Flatland is quite interesting, providing some background on the author and mathematics and the society of the time.
("more sense" doesn't mean it actually does make sense - it all still seems a bit muddled to me, with a random mixture of physical differences and social differences between people, and strange science (like Lamarckian evolution where the actions of a parent affect the number of sides (hence social status) not of themselves but of their offspring), and sections that I don't understand the point of (like the whole thing about colour being discovered and then banned - it makes sense within Flatland but is it meant to be satirising anything in real life?). Much of it is probably because the world has changed so drastically in 125 years that I just can't understand where the author was coming from. But it's an interesting book despite (or perhaps because of) that.)
The Akron Beacon Journal is reporting that the trial of the three election workers accused of rigging the 2004 presidential election recount in Cuyahoga County is finally underway. As you may recall, this was the case where poll workers "randomly" selected the precincts to recount by first eliminating from consideration precincts where the number of ballots h
Recorded on telemetry tapes, they are said to be the best quality images of the landing (unconverted slow-scan TV) yet to be seen by a public still fascinated by the early space race. These tapes were mislaid in the early 1980s on their way to NASA's Goddard Space Flight Center in Greenbelt, Maryland.