
Comment Re:Extended Support Release (Score 1) 366

Everyone complains about Firefox's automatic updates. I hope they aren't complaining about automatic updates as such, but about the way Firefox currently does them (most annoyingly, the UAC prompt required to perform the update on Windows). Automatic updates are not bad! Web developers rejoice because of them -- we hope they'll stop users from sticking with an outdated browser and then complaining that websites don't work anymore.

I think we are all tired of supporting five or six versions of X browser. Auto updates mean we target the "stable" channel of each browser. Web standards are much better implemented today, and automatic updates will feed that, too. If automatic updates were bad, Google would have turned them off for Chrome, and Firefox and Microsoft never would have adopted them.

Don't forget that old software has *security vulnerabilities*; you know, in case you don't want your personal details splayed across the web for the malicious to prey on. Automatic updates keep you protected.

Comment Re:Extended Support Release (Score 1) 366

Google does not limit what extensions can be loaded into Chrome, so anyone can write another ad-block extension for it. And Chrome is open source (well, mostly), so if Google actually does this, we can fork Chromium and call it a day. I believe ad-free browsing is safe (for both you and me; I have had AdBlock in Chrome for a very long time, Slashdot excluded of course :-D)

Comment Re:Extended Support Release (Score 1) 366

So it's clear you run Firefox Nightlies; can I assume you use non-production versions of the rest of your browsers too? Might I suggest that this could be the source of your problems? I think some people don't realize that pre-production does not mean "latest and greatest"... it just means "latest". Kudos for encouraging people to report bugs, though :-D

Comment Re:I regularly (Score 1) 359

Interesting post, though it's worth noting that just because X is on a platform doesn't mean an app will "just work", since X is only the display layer. A lot of under-the-hood porting is still required -- and these days, when GTK and KDE are already ported to the Windows display framework, having X on Windows isn't particularly useful (anymore). It would have made the toolkit porting a little easier, sure.

Comment Re:I regularly (Score 1) 359

I think you're mistaking us for yourself. Even at work with two huge high-resolution monitors, I need to maximize the area available for my code. I need to be able to switch from my testing web browser to the one that has my project-management tabs, Rdio, Mail, or anything else I might need in a day. And I'd like to see my terminals (multiple, usually one for our test installation and one for production installations) full screen. Having a spatial metaphor for navigating these apps makes it easier to switch between them. I use VirtuaWin at work, which I like because it supports Ctrl+Alt+Arrow to switch between desktops. Thus I know that from my code, I can hit Ctrl+Alt+Down to reach my terminals, and Ctrl+Alt+Right to switch to my "utilities" browsers.

There's not a LOT of stuff I have open, but it's easier to navigate it spatially than it is to constantly navigate Alt+Tab.

That being said, I *far* prefer Linux, where I get a fast, glitch-free slide animation. But when I'm doing real work on Windows, the more real estate I can get the better. On a slightly related topic: why can't I do a half-maximize for a window on the inner edge of my dual-monitor setup?

Comment Re:Going down in flames (Score 1) 575

Doing it on the server or in the client is a dumb idea, because you get a lot more overhead on each page load, you have to transmit more bytes, ...

There is no network / byte transmission overhead when you do it on the server :-)

you don't get to look at the source until you run it through the server

Yes, but the C preprocessor has the same issue as a server-side language... I think what you're describing is a "compilation stage" for Javascript, i.e., writing in nicely separated files and then using a preprocessor to create a finalized version, maybe topped off with some minification. I can dig that.

and you don't get nearly as much code reuse.

Hrm, I don't know about that. Choice of preprocessor isn't really going to limit your reusability.

Don't knock the c preprocessor - every programming language could use the same abilities. Especially the crudfest known as java.

I hear that. It's definitely true that Javascript needs some kind of preprocessor since network transfer is so expensive given the architecture.

Comment Re:Going down in flames (Score 1) 575

Oh please don't.

I'm with you on preprocessing being *useful* for Javascript, though it's certainly far from a *big problem* overall. The biggest issue is doing separation of concerns by *file*, since there is no way to import in Javascript. The thing is, this might as well be by design, since doing such inclusions in the browser would cause network overhead. Hence the need for a server-side preprocessor, right?

If only we had a PreProcessor which dealt with things like javascript and Html. Like, a Hypertext PreProcessor, or.... a Pre-Hypertext Processor. Or maybe a pretty red gem. I'm just saying, I think there are slightly more appropriate tools to use than the C preprocessor.

Comment Re:Going down in flames (Score 1) 575

That makes my point, doesn't it? It's non-obvious what the code does, even if you're familiar with jQuery.

To really understand it, you need a practical knowledge of jQuery, html, css, and ... (wait for it) ... the hard parts of javascript that you're trying to avoid learning about and using in the first place!

(Oh, and if you let me make my own versions of the before and toggleClass functions I can do that in javascript in well under 30 lines of much more readable code.)

It is entirely obvious what is happening, because I am educated in the frameworks we are looking at. I know the DOM; sometimes I have to use it when I can't use jQuery or MooTools or any other framework with DOM CSS selectors. And that's the important part: DOM CSS selectors, which are essential to efficiently writing flexible Javascript. Why? Because the structure of an HTML document DOES change, and you need a query that will not break when someone sticks another layer of divs beneath the markup you are working with. CSS selectors allow for this. DOM traversal does NOT (yet; please standardize document.getElementsBySelector()!).

I think few of the anti-framework trolls here have delivered large-scale web software to a client before, let alone maintained it for a considerable amount of time. We do exactly that at my job, and jQuery is the LEAST of our maintainability concerns. In fact, it's a shining glowy miracle in that it lets us express our DOM location needs in an easy-to-understand, flexible way that allows for later changes to the underlying markup without significantly breaking the Javascript that is in place. The relevant DOM traversals here are expressed on a single line. This is not possible with the DOM proper until document.getElementsBySelector is standardized. And when that happens, yes, it starts to become feasible to skip the frameworks and just use the standard DOM -- but even then you are kidding yourself if you think you won't be writing twice as many lines of code, or just duplicating the same functions the frameworks already have, except theirs have better code coverage and much more real-world testing than yours do. How egotistical do you need to be before you'll just accept someone else's perfectly good implementation of code you are about to write?

Oh, and no, you can't make your own versions of the functions; that defeats the point. Besides, if your idea is so great, how come you haven't released it as open source and come back to Slashdot to troll people into using it because it's so much "better" from an engineering perspective?

The OP has a good question about the best way to use Javascript to facilitate classes and object-oriented design, but instead it's turned into a troll fest for people who are too rigid to understand the de facto industry-standard paradigm for the web, simply because it's so different from the languages they work with.

Comment Re:Going down in flames (Score 1) 575

> Well, it is unsafe, is anti-reusability, unclear, and impossible to debug without rewriting it to see why it doesn't work or suddenly stopped working. Defining a function inline to your statement, and in that function performing evaluations in your return call that are based upon objects that may be null when you execute an operation on them?

$("#showhidecontrol") --- this call might not find the id="showhidecontrol" element, in which case it returns an *empty jQuery object*, not null. newToggleElement would then be a clone of the empty jQuery object, so it's OK to call .click() on it (verified in the console). So this code is safe and will *not* cause Javascript errors, assuming jQuery has been loaded (safe to assume).

It is unclear because you do not understand the tools. I use jQuery every day and yes, it is absolutely better than stock Javascript (I know and understand the standardized DOM very well).

Reusability: This code is itself a function, the constituent parts of it do not provide a huge return in terms of reusability. Let's look at what happens if we define the lambdas here as non-inlined functions.

function hideableClickBehavior() {
    // oh shit, what is hideableElement here? It (potentially) has no relationship to the element we are acting on!!!!
    // hideableElement.toggleClass("hidden");
}

function applyHideableBehavior() {
    var hideableElement = $(this); // wtf is "this"? It makes no sense in this context now, but still works and is correct!!
    var newToggleElement = $("#showhidecontrol").clone();
    newToggleElement.click(hideableClickBehavior);
    return newToggleElement;
}

$(".hideable").before(applyHideableBehavior);

What this produces is NOT reusable code; it is less clear code where the parts of the operation are harder to follow. Certainly other things CAN be done to what I have above to make it more "reusable", but to what benefit? If the app is designed correctly, there should be no other binding between an element event and the applyHideableBehavior() function, because this behavior is supposed to be unique. If you want reusability, REUSE THE 'hideable' BEHAVIOR; don't just blindly apply principles of reusability and expect them to give you a benefit in your software engineering endeavors.
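To make the closure point concrete without jQuery in the way, here is a plain-Javascript sketch (the names are mine, not from the code above): a handler built inline can close over the element it belongs to, while a detached top-level function has no way to reach it.

```javascript
// A factory that returns an inline handler closing over `element`.
function makeToggle(element) {
  return function () { element.hidden = !element.hidden; };
}

// Detached top-level handler: there is no `element` in scope here,
// which is exactly the hideableElement problem above.
function detachedToggle() {
  // element.hidden = !element.hidden; // ReferenceError: element is not defined
}

var el = { hidden: false };
var toggle = makeToggle(el);
toggle();
console.log(el.hidden); // true: the closure kept the element relationship
```

The inline lambda isn't sloppiness; it's the mechanism that ties the behavior to the element it acts on.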

Now stop trolling about stuff you don't understand.

Comment Ahmmm (Score 2) 803

/
|-- etc
|-- usr
| |-- bin
| |-- lib
| |-- lib64
...
|-- bin -> usr/bin

That '->' in here? That means a symbolic link (ln -s) ...

One of the biggest potential hurdles against LSB (and indeed the whole plan) could be /bin/sh. A lot of shell scripts have #!/bin/sh as their first line (or #!/bin/bash or what have you). If /bin went away, any script with a call like this inside would break. Some scripts might use env to locate the script interpreter, and since env is already in /usr/bin/env, these won't have problems.

I think this guy forgot to read his own diagram... if /bin is symlinked to /usr/bin, those scripts will work perfectly fine. This article is a little stupid, imho. In all fairness, the author does acknowledge that there are workarounds, but this paragraph and the one after it discuss complete non-issues.

Many distributions have reworked the filesystem hugely, with regular symlinks on /usr, /tmp, or wherever, to deal with code that hardcodes these locations (which, in most cases, should never be required: you should be accessing 'bin' directories via PATH, 'lib' directories via LD_LIBRARY_PATH, and 'tmp' directories using the stdlib functions or a language-specific wrapper).

Certainly '/etc' has no standardized variable or way of accessing it other than just hardcoding in '/etc', but this is why we have symlinks.

I agree with the other posts about keeping the boot volume separate from the non-essential OS volume (/usr). It is there for a reason, not just compatibility. Also, /sbin has always been stupid, so good riddance there. Separating superuser executables from normal executables helps nothing in 99% of cases; what matters is the write and execute permissions on the actual executables, not the directory. I believe the argument is that if you want tiered write access among non-root administrator accounts, with some administrators able to write to /usr/bin but not /usr/sbin, you can do it with SLIGHTLY less work. But really, if everything is written PROPERLY, a simple change to the system's PATH variable and some time moving the right executables into a more restricted directory will yield an identical result, without the clutter by default and without the need to explain what sbin is to newbie users.
Also, whoever had the bright idea of leaving */sbin out of the PATH was just wrong; I have no idea what their argument is on that.

Comment Re:False assumption (Score 1) 814

I agree on spaces being antisocial. However, the size of the tab stop affects the alignment of the code: if you write with 4-space tabs and then view the code with 8-space tabs, things will be out of line. Not to mention that occasionally you need to line up with something that isn't on a tab boundary, and mixing in spaces there gunks it up even more.
