Everyone complains about Firefox's automatic updates. I hope they aren't complaining about automatic updates as such, but about the way Firefox currently performs them (most annoyingly, the need to click through UAC on Windows to apply an update). Automatic updates are not bad! Web developers rejoice because of them -- we hope they will keep stupid users from sticking with an outdated web browser and then complaining that websites don't work anymore.
I think we are all tired of supporting five or six versions of browser X. Auto updates mean we can target the "stable" channel of each browser. Web standards are much better implemented today, and automatic updates will feed that, too. If automatic updates were a bad idea, Google would have turned them off for Chrome, and Mozilla and Microsoft would never have adopted them.
Don't forget that old software has *security vulnerabilities*. If you don't want your personal details splayed across the web for the malicious to prey on, automatic updates keep you protected.
I think you're mistaking us for yourself. Even at work with two huge high-resolution monitors, I need to maximize the area available for my code. I need to be able to switch from my testing web browser to the one that has my project-management tabs, Rdio, Mail, or anything else I might need in a day. And I'd like to see my terminals (multiple, usually one for our test installation and one for production) full screen. Having a spatial metaphor for navigating these apps makes it easier to switch between them. I use VirtuaWin at work, which I like because it supports Ctrl+Alt+Arrow to switch between desktops. That way I know that from my code, I can press Ctrl+Alt+Down to reach my terminals, and Ctrl+Alt+Right to switch to my "utilities" browsers.
There's not a LOT of stuff I have open, but it's easier to navigate it spatially than to constantly cycle through Alt+Tab.
That being said, I *far* prefer Linux, where I get a fast, glitch-free slide animation. But when I'm on Windows doing real work, the more real estate I can get the better. On a slightly related topic: why can't I half-maximize a window against the inner edge of my dual-monitor setup?
Doing it on the server or in the client is a dumb idea, because you get a lot more overhead on each page load, you have to transmit more bytes,
There is no network / byte transmission overhead when you do it on the server.
you don't get to look at the source until you run it through the server
Yes, but the C preprocessor has the same issue as a server-side language... I think maybe what you're thinking of is a "compilation stage" for Javascript, i.e., writing nice separated files and then using a preprocessor to create a finalized version, maybe topped off with some minification. I can dig that.
and you don't get nearly as much code reuse.
Hrm, I don't know about that. Choice of preprocessor isn't really going to limit your reusability.
Don't knock the C preprocessor -- every programming language could use the same abilities. Especially the crudfest known as Java.
I hear that. It's definitely true that Javascript needs some kind of preprocessor since network transfer is so expensive given the architecture.
Oh please don't.
I'm with you on preprocessing being *useful* for Javascript, though it's certainly far from a *big problem* overall. The biggest issue is doing separation of concerns by *file*, since there is no way to import one Javascript file from another. Thing is, though, this might as well be by design, since doing such inclusions at runtime would cause network overhead. Hence the need for a server-side preprocessor, right?
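As a sketch of what such a server-side include pass could look like (the //#include directive syntax here is entirely made up, in the spirit of the C preprocessor):

```javascript
// Toy server-side include pass: lines of the form //#include "name"
// are replaced by the contents of the named module before the file
// is ever sent to the browser, so the client sees one file and pays
// for one request. Here "modules" is an object standing in for
// files on disk.
function preprocess(source, modules) {
  return source.split("\n").map(function (line) {
    var m = line.match(/^\/\/#include "(.+)"$/);
    return m ? (modules[m[1]] || "") : line;
  }).join("\n");
}
```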
If only we had a PreProcessor which dealt with things like Javascript and HTML. Like, a Hypertext PreProcessor, or... a Pre-Hypertext Processor. Or maybe a pretty red gem. I'm just saying, I think there are slightly more appropriate tools to use than the C preprocessor.
That makes my point, doesn't it? It's non-obvious what the code does, even if you're familiar with jQuery.
To really understand it, you need a practical knowledge of jQuery, HTML, CSS, and
(Oh, and if you let me write my own versions of the before and toggleClass functions, I can do that in Javascript in well under 30 lines of much more readable code.)
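For what it's worth, a hand-rolled toggleClass could look something like this (a sketch only -- it works on a className string rather than on an element, so it stays framework-free):

```javascript
// Rough sketch of a hand-rolled toggleClass. Takes and returns a
// className string (e.g. elem.className); adds the class if it is
// absent, removes it if it is present.
function toggleClass(className, cls) {
  var classes = className.split(/\s+/).filter(Boolean);
  var i = classes.indexOf(cls);
  if (i >= 0) classes.splice(i, 1); // present: remove it
  else classes.push(cls);           // absent: add it
  return classes.join(" ");
}

// Usage against a real element would be:
//   elem.className = toggleClass(elem.className, "active");
```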
It is entirely obvious what is happening, because I am educated in using the frameworks we are looking at. I know the DOM; sometimes I have to use it when I can't use jQuery or MooTools or any other framework that has DOM CSS selectors. And that's the important part: DOM CSS selectors, which are essential to efficiently writing flexible code in Javascript. Why? Because the structure of an HTML document DOES change, and you need a query which will not break when someone sticks another layer of divs beneath the markup you are working with. CSS selectors allow for this. DOM traversal does NOT (yet -- please standardize document.getElementsBySelector()!).
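A toy illustration of that brittleness, using a made-up stand-in for DOM nodes rather than a real document:

```javascript
// Minimal stand-in for a DOM node: { tag, className, children }.
// The markup and class names below are hypothetical.
function makeNode(tag, className, children) {
  return { tag: tag, className: className || "", children: children || [] };
}

// Brittle, traversal-style lookup: assumes the menu is the FIRST
// child of the container.
function firstChildList(container) {
  return container.children[0];
}

// Resilient, selector-style lookup: search the whole subtree for a
// node with the class, the way a ".menu" CSS selector would.
function findByClass(node, cls) {
  if (node.className === cls) return node;
  for (var i = 0; i < node.children.length; i++) {
    var hit = findByClass(node.children[i], cls);
    if (hit) return hit;
  }
  return null;
}

var menu = makeNode("ul", "menu");
var before = makeNode("div", "", [menu]);
// Someone wraps the menu in an extra layer of divs:
var after = makeNode("div", "", [makeNode("div", "wrapper", [menu])]);
// firstChildList(after) now returns the wrapper, not the menu,
// while findByClass(after, "menu") still locates it.
```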
I think few of the anti-framework trolls here have delivered large-scale web software to a client, let alone maintained it for a considerable amount of time. We do exactly that at my job, and jQuery is the LEAST of our maintainability concerns. In fact, it's a shining, glowy miracle, in that it lets us express our DOM location needs in an easy-to-understand, flexible way that tolerates later changes to the underlying markup without significantly breaking the Javascript already in place. The relevant DOM traversals here are expressed on a single line. That is not possible with the DOM proper until document.getElementsBySelector is standardized. And when that happens, yes, it starts to become feasible to skip the frameworks and just use the standard DOM -- but even then you are kidding yourself if you think you won't be writing twice as many lines of code, or just duplicating the same functions the frameworks already have, except theirs have better code coverage and much more real-world testing than yours. How egotistical do you need to be before you'll just accept someone else's perfectly good implementation of code you are about to write?
Oh, and no, you can't make your own versions of the functions; that defeats the point. Besides, if your idea is so great, how come you haven't released it as open source and come back to Slashdot to troll people into using it because it's so much "better" from an engineering perspective?
The OP has a good question about the best way to use Javascript to facilitate classes and object-oriented design, but instead it's turned into a troll fest for people too rigid to understand the de facto industry-standard paradigm for the web, simply because it's so different from the languages they work with.
/
|-- etc
|-- usr
| |-- bin
| |-- lib
| |-- lib64...
|-- bin -> usr/bin
That '->' in the diagram means a symbolic link (ln -s).
One of the biggest potential hurdles against LSB (and indeed the whole plan) could be
/bin/sh. A lot of shell scripts have #!/bin/sh as their first line (or #!/bin/bash or what have you). If /bin went away, any script with a shebang like that would break. Some scripts use env to locate the interpreter instead, and since env already lives at /usr/bin/env, those won't have problems.
I think this guy forgot to read his own diagram... if
Many distributions have reworked the filesystem hugely, with some regular symlinks on
Certainly '/etc' has no standardized variable or other way of accessing it beyond hardcoding '/etc', but this is why we have symlinks.
I agree with the other posts about keeping the boot volume separate from the non-essential OS code volume (/usr). It is there for a reason, not just compatibility.
Also, whoever had the bright idea of leaving */sbin out of the PATH was just being ridiculous. I have no idea what the argument for that is.
"Just think, with VLSI we can have 100 ENIACS on a chip!" -- Alan Perlis