Do not confuse open formats with open source software. They are two different things.
Most of the HTML5 specification gets developed here first:
Then, eventually, after a long process, it will end up here:
However, the picture tag actually came from the community first, not from the W3C or the vendors directly:
http://responsiveimages.org/. Only later did it become http://www.w3.org/community/re..., and later still it became part of the HTML5 specification.
What probably happens is that a big site says: we use CA 1 and CA 2.
Then it uses CA 1. When CA 1 somehow becomes a problem, they switch to the certificates from CA 2 that they already have prepared and ready for use.
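The "two CAs prepared in advance" idea is essentially certificate pinning with a backup pin. A minimal sketch of the client-side check, with made-up key material and fingerprints (not a real TLS library):

```python
import hashlib

# Hypothetical pinned fingerprints for CA 1 and the prepared backup CA 2.
PINNED_CA_FINGERPRINTS = {
    hashlib.sha256(b"ca1-public-key").hexdigest(),
    hashlib.sha256(b"ca2-public-key").hexdigest(),  # backup pin
}

def issuer_is_pinned(issuer_public_key: bytes) -> bool:
    """Accept a chain if its issuing CA matches either prepared pin."""
    fp = hashlib.sha256(issuer_public_key).hexdigest()
    return fp in PINNED_CA_FINGERPRINTS

# A switch from CA 1 to CA 2 needs no change on the client side,
# because both pins were published in advance:
print(issuer_is_pinned(b"ca1-public-key"))    # -> True
print(issuer_is_pinned(b"ca2-public-key"))    # -> True
print(issuer_is_pinned(b"rogue-public-key"))  # -> False
```

This is why the backup pin has to be published before the switch: a client that only ever saw CA 1's pin would reject the new chain.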
An empty list also needs to be signed. So no.
The Electrolysis project is scheduled to land in the stable release at the end of this year. Whether it will be enabled by default this year, I don't know. My gut feeling is they'll do so early next year.
You can also configure every browser to use a proxy server and then block all other web traffic at the firewall.
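A sketch of the firewall half of that setup, assuming a Linux/iptables gateway and a proxy at 10.0.0.5:3128 (the address and port are made up):

```shell
# Allow forwarded web traffic only towards the proxy,
# then drop direct HTTP/HTTPS from clients.
iptables -A FORWARD -p tcp -d 10.0.0.5 --dport 3128 -j ACCEPT
iptables -A FORWARD -p tcp --dport 80  -j DROP
iptables -A FORWARD -p tcp --dport 443 -j DROP
```

With rules like these, the only way out to the web is through the proxy the browsers were configured to use.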
You mean what corporate networks are doing is wrong. That is the biggest flaw.
They should move to a model where a proxy is configured in the browser. The browser can then trust the proxy.
It's a bit more complicated.
The big standards organisation is the W3C. They only call something a standard after everyone agrees on what the standard is and there are implementations in the field that prove the model works. In that sense they are a bit like the IETF. Part of the IETF motto (from the Tao of the IETF): "We believe in rough consensus and running code."
So in the case of HTML5, all browsers first implement the parts of HTML5 they want to. Only when there are multiple implementations of a feature, everyone agrees on what that part of the standard should look like, and the documents are ready, will the W3C rubber-stamp it as a standard.
So you can already use it before it is a standard. Most parts of the specification, by now probably pretty much all of it, are stable; they are just revising the documents to improve the wording and add clarifications.
Using the implementations is actually encouraged, because the vendors want to see how a feature is used, to know whether the specification actually works the way it was intended, or is just too complicated to work with.
Then you have the WHATWG, a number of browser vendors (Mozilla, Opera, Microsoft, WebKit/Blink) sitting together and creating new HTML5 ideas and standards documents. Those documents can eventually be used as a basis by the W3C.
The WHATWG was formed when the W3C, a long time ago, said: all HTML will be XML-based in the future, basically declaring HTML a document format. The WHATWG said: no way. Let's start a new group, because we don't want to deal with strict XML, and we actually want to make it possible for the web to be an application delivery platform.
So really, HTML5 is pretty much done. All the browser implementations are done, except for support of certain parts or features.
HTML 5.1 is just a working-document title: a set of new features that will eventually end up as part of HTML5.
Fun fact about the picture element: it did not come out of the WHATWG, the W3C, or the browser vendors. It came out of a community of web developers trying to create 'responsive images', a problem that didn't have a good solution yet.
Responsive images are about downloading the right size of image for the device it will be displayed on.
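For reference, the element that came out of that effort looks like this; the image file names and breakpoints are placeholders:

```html
<picture>
  <!-- The browser picks the first source whose media query matches -->
  <source media="(min-width: 1024px)" srcset="large.jpg">
  <source media="(min-width: 640px)"  srcset="medium.jpg">
  <!-- Fallback for small screens and for browsers without picture support -->
  <img src="small.jpg" alt="A responsive image">
</picture>
```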
Well, there is a whole lot of Java in the enterprise and other organisations like banks.
Obviously they are always the last to move, because they have a lot of legacy applications anyway.
No, dynamically is definitely the wrong approach. It just won't work.
Shipping LLVM bitcode could still be possible, yes.
Does any distribution have some kind of package that can be installed? Something like llvm-runtime, the way you can install Python or Java.
Shouldn't be too hard to make a package for Linux for that, right?
Maybe someone could even add it to the kernel so it can recognise the bitcode.
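You don't even need a kernel change for that: Linux already has binfmt_misc for teaching the kernel new executable formats. A sketch, assuming lli (the LLVM IR interpreter) is installed at /usr/bin/lli; requires root:

```shell
# Register LLVM bitcode with binfmt_misc: files starting with the
# bitcode magic 'BC\xc0\xde' get handed to lli. binfmt_misc itself
# parses the \xNN escapes in the magic field.
echo ':llvm:M::BC\xc0\xde::/usr/bin/lli:' > /proc/sys/fs/binfmt_misc/register

# After that, a bitcode file marked executable runs directly:
#   clang -emit-llvm -c hello.c -o hello.bc
#   chmod +x hello.bc && ./hello.bc
```

The same mechanism is how distributions wire up things like Wine for .exe files or QEMU for foreign-architecture binaries.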
Maybe it is just me, but when I see these things I sometimes get crazy ideas. And I think:
might as well translate it into LLVM bitcode and recompile the code:
Hell, maybe it's even faster if you compile the LLVM bitcode with Emscripten and use asm.js to run it in the browser.
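That pipeline is roughly what Emscripten does. A sketch, assuming a toolchain where emcc accepts LLVM bitcode as input (the bitcode has to be produced for Emscripten's own target, not a native one):

```shell
# C source -> LLVM bitcode -> asm.js running in a browser.
emcc -emit-llvm -c hello.c -o hello.bc   # source to LLVM bitcode
emcc hello.bc -o hello.html              # bitcode to asm.js + HTML shell
# Open hello.html in a browser to run it.
```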
We are not disposable blue-collar idiots. We are white-collar professionals, and we just want the same damn respect that accountants, other department managers, other educated employees, and even secretaries get within the same organization.
If you are not in the 1%, then you are one of the rest.
Some might get a bit more, some might get a bit less.
That's all there is to it.
The problem is context.
On one site a given piece of content is illegal; on another it is legal.
Why do people think "virtualized computing" is cloud? It isn't; a VMware cluster isn't cloud.
Cloud has characteristics like:
- pay per use
- an API to control it, so it can be automated
- a failure model, like availability zones, so you know things are 100% separated and if one AZ goes down, another AZ does not depend on it
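A toy sketch of what that failure model buys you; the AZ names, instance names, and health map are all made up, and nothing here is a real cloud API:

```python
# Instances spread over availability zones; when one AZ fails,
# traffic is routed only to instances in the surviving AZs.

def healthy_instances(instances, failed_zones):
    """Return the instances not located in a failed availability zone."""
    return [i for i in instances if i["az"] not in failed_zones]

instances = [
    {"name": "web-1", "az": "az-a"},
    {"name": "web-2", "az": "az-b"},
    {"name": "web-3", "az": "az-b"},
]

# az-a goes down; because az-b does not depend on it, service continues.
survivors = healthy_instances(instances, failed_zones={"az-a"})
print([i["name"] for i in survivors])  # -> ['web-2', 'web-3']
```

The separation guarantee is the whole point: the failover logic can be this simple only because the zones share no dependencies.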
Nobody says it has to be virtual either; you can get physical machines from Rackspace or SoftLayer.
Number of birds killed for human consumption?