Comment Re:Plugin Support (Score 1) 236

Something I discovered quite by accident, while troubleshooting extension issues here (still on 3.6), is that if I start in safe mode just long enough to pop up the safe-mode dialog, then cancel it (or kill it) without even starting the browser proper, things "magically" work the next session. But the session after that, it's back to crashing.

So I rigged up a bash script that starts firefox in safe mode, sleeps a couple seconds, kills it, then starts firefox normally with the URL I intended to go to. I put the script first in my path so it gets run instead of the normal firefox binary, and everything works fine beyond a couple seconds' additional delay and a short blip of the safe-mode dialog before it gets killed. (That would be seriously irritating if FF were my main browser, but I use kde and thus konqueror for that. Mainly, I end up starting FF on sites not yet set up properly for scripting in konqueror, since konqueror has site-level scripting permissions similar to noscript, but no way to see what domains the page is trying to pull scripts from without diving directly into the page source! =:^( So konqueror gets used on my normal sites including /., while firefox gets used on a reasonable number of the opportunistic-visit sites.)
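For anyone curious, the wrapper amounts to something like this. It's a rough sketch only; the real-binary path, the sleep duration, and the URL handling are assumptions that will vary per setup:

    #!/bin/bash
    # firefox wrapper: briefly start safe mode to work around the crash,
    # then start the browser normally with whatever URL was passed in.
    # /usr/bin/firefox is an assumed path; point it at your real binary.
    REAL=/usr/bin/firefox

    # Start safe mode in the background, just long enough for its dialog to appear.
    "$REAL" -safe-mode &
    SAFE_PID=$!
    sleep 3
    kill "$SAFE_PID" 2>/dev/null

    # Now start the browser proper.
    exec "$REAL" "$@"

Since the script sits earlier in the path than the real binary, it has to call that binary by its full path, or it would just invoke itself recursively.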

Comment Re: Download Helper not free as in freedom (Score 1) 236

Unfortunately, Download Helper wasn't freedomware the last time I looked. It was off my system within days of my realizing that; if I wanted to run servantware, I wouldn't have dumped a decade of MS experience to switch to Linux, now nearing a decade ago (when it became apparent what MS was doing with eXPrivacy; I spent some time preparing and then started my final switch to Linux the weekend eXPrivacy was released).

If I want to bother, one of the other available tools often does the job. If not, well, watching that video wasn't that important after all.

Comment Re:Okaaaaaay... (Score 4, Informative) 64

Well, there are the proprietary drivers, which AMD/ATI do what they want with, dropping support for old chips, etc, and there are the native xorg/kernel/mesa/drm (and now KMS) drivers, which are open. The open drivers support chips at least as far back as the Mach64 and ATI Rage, and while I never used those specific drivers, after the nVidia card I had when I first switched to Linux showed me what a bad idea the servantware drivers were, I've stuck with the native Radeon drivers. In fact, I was still using a Radeon 9200 (r2xx chip series) until about 14 months ago, when I upgraded to a Radeon hd4650 (r7xx chip series), so I /know/ how well the freedomware support lasts. =:^)

And why would the free/libre and open source (FLOSS) folks build a shim for the servantware driver? The kernel specifically does NOT maintain a stable internal ABI (the external/userland interface is a different story; they go to great lengths to keep that stable), and if anyone's building proprietary drivers on it, it's up to them to maintain their shim between the open and closed stuff as necessary. Rather, the FLOSS folks maintain their native FLOSS drivers.

And while for the leading edge it's arguable that the servantware drivers perform better and for some months may in fact be the only choice, by the time ATI's dropping driver support, the freedomware drivers tend to be quite stable and mature (altho there was a gap in the r3xx-r5xx time frame after ATI quit cooperating, before AMD bought them and started cooperating with the FLOSS folks again, part of the reason I stuck with the r2xx series so long, but those series are well covered now).

So this /is/ good news, as it should allow the freedomware drivers to better support hardware video accel, as the new information gets merged in.

Comment Re:Connection, yes. Server, no. (Score 1) 235

The only thing NAT gives you that a default policy of REJECT or DROP doesn't is extra latency and higher CPU load on the firewall.

Not exactly. Someone above argued, quite persuasively I might add, that had IPv6 been the norm before we got broadband, given how firewalls are configured, support people would have simply said "shut off the firewall", and people would have, and it'd work, and they'd never turn it back on. With NAPT OTOH, once there was more than one computer behind it, shutting off the NAPT really wasn't an option at the consumer level, so application writers and their support people had to learn to deal... and that's just what they did! Meanwhile, NAPT got smarter as well, with various auto-configuration protocols like UPnP, etc. None of that would have happened if it was as simple as telling the user to shut off their firewall so that, magically, everything worked.

See above for more. The post is high scored, and he said it better than I.
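For reference, the "default policy of REJECT or DROP" the parent mentions is easy enough to express; here's a minimal sketch for a Linux router using ip6tables (the interface name and the ICMPv6 rule are my own assumptions, not anything from the discussion):

    # Stateful IPv6 forwarding policy: unsolicited inbound traffic is dropped,
    # which is the protection people usually credit NAPT with, minus the translation.
    ip6tables -P FORWARD DROP
    ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    ip6tables -A FORWARD -i eth1 -j ACCEPT        # eth1 = LAN side (assumed name)
    ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT   # ICMPv6 is needed for IPv6 to function

The rules really are that simple, which is exactly why "just shut it off" would have been such easy support advice.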

NAT also makes it harder to figure out who the badguy is if one of the internal machines attacks a remote machine (for example, because it got a virus or some employee is running something they shouldn't be).

Actually, that's a good part of the point. Behind the NAPT is a private network, for those that run it to manage. It's nobody's business out front what the network behind the NAPT looks like, nor is it their business to trace the bad guy beyond the NAPT. They trace it /to/ the public IP, and take appropriate action from there. Said action should be contacting the administrator for that IP (or IP block, more likely), and letting them deal with it. If they don't, then stronger action, such as blocking the entire IP or IP block, may be warranted. However, what or who was doing whatever is nobody's business but that of the folks behind and running that NAPT, because that's a private network. The NAPT therefore functions in much the same way as the NID in the POTS network. Beyond the NID, it's up to the owner. Beyond the NAPT, it's up to the owner. Let them worry about it, and react if necessary based on whether or not they stop the abuse from emanating.

Comment Re:Huh (Score 1) 297

Good points. That's beyond what I (think I) know, but it's certainly interesting. I'd love to see Stallman's take on it, or [the name fails me, was the FSF lead lawyer for awhile but I think he has stepped down from that, I see his picture in my mind, but am about ready to go to bed and don't want to go looking for the name...].

Comment Re:Huh (Score 1) 297

As in, put the sources on the same disk as the binaries? =:^)

To the best of my knowledge, no, no "force" is necessary. However, the sources do need to be not just available at the same time, but accompanied by a reasonably prominent message saying they are. IOW, if the LiveCDs are out for the taking, there needs to be either another stack of source CDs, or at least a sign beside the LiveCDs saying ask for a source disk and we'll burn one. It can't be simply up to the passer-by to ask, without some reasonably sized sign saying to ask if you want a source disk too, or it falls back to the notice on the binaries disk (which you'd better be sure to have included), thus triggering the 3-year clock.

Similarly, if there's a nice, prominently placed link on a web site to download the LiveCD, the link to the sources can't be hidden somewhere in the terms-and-conditions links at the bottom, or it falls back to the provide-on-request-for-at-least-three-years clause. Neither does a link to the sources repository in general cover it, AFAIK. It has to be a link to the specific sources used to create the binaries on the distribution media, and it needs to be reasonably visible and logically attached to the link to the binaries. The links to the sources should be placed logically like the links to the LiveCDs, say as the next item in the menu, or the LiveCD link could take you to a landing page which, for example, lists the sources as one choice alongside all the supported archs, in either a list of links or a drop-down menu.

Of course, when browsing the FTP or HTTP file server itself, none of that is usually a problem (as long as the sources are available at all), because at that level it's all a bare listing of directories and files anyway, and a logical directory tree makes it easy enough to find the sources. But besides being a mark of good community relations, this clause is behind the big distributions' policies of making srpms, etc, as available as the rpms that far more people use. And the whole policy, while it might seem niggling in its requirements at times, is one of the big reasons the Linux community is as active and vitally healthy as it is. Take away that access to the sources, or even simply make them harder to get (by encouraging the mail-a-request model, now discouraged by that three-year clause), and you quickly choke the vitality that is the ever-living, growing, changing Linux community.

Comment Re:Huh (Score 2, Informative) 297

So if I sell someone a box with a linux distribution installed on it, do I need to print out all of that distribution's source code and ship it with the computer as well?

You don't need to print it out. In fact, that would be discouraged and may not even meet the requirement of being in a customary format today (too lazy to go look up the specific GPL wording ATM, but electronic format would be encouraged, and dead-tree format discouraged since it has to be converted back to electronic format for use). You do, however, need to make the sources available -- and no, pointing at upstream doesn't suffice, except "in the trivial case". (Again, I'm not going to go look up the details, but the idea is that individual users can share, say, Ubuntu and point to Ubuntu for sources, but commercial distributors and the like must make them available themselves.)

With GPLv2 (which the Linux kernel uses), you have two choices. If you provide sources at the time of purchase, say by throwing in a CD/DVD with the sources on it, you're covered, and don't have to worry about it when you quit distributing the binaries. Similarly with a download: if you provide the binaries and an archive with the sources at the same time, you're fine. Similarly if you distribute disks at a FLOSS convention or the like. Have a stack of disks with the binaries and another with the sources (or a computer with a burner set up to burn a disk of the sources on demand, assuming there'll be less demand for that than for the binaries).

Alternative two is to include a notice /with/ the software offering the sources for no more than a limited handling fee, to cover the costs of providing them (and this cost may be audited if you decide it's several hundred or thousand dollars...). *HOWEVER*, if you do this, the offer of sources must be valid for (I believe it's) three years from the time of distribution of the binaries -- thus, you must keep the exact sources needed to build the binaries as distributed (not an updated version of those sources) available for three years BEYOND the point at which you stop distributing the binaries, so users who got the binaries the last time they were distributed have sufficient time to decide they want the sources, and (covering the user-trivial case) so they can pass on disks using you as an upstream sources provider, without having to become a provider themselves.

It is the three-year requirement in this choice that sometimes ensnares distributions that are otherwise playing by the rules, as many don't realize that if they only include the offer for sources, it must remain valid for three years after they've stopped distributing the binaries. Take a distribution that may be distributing historical versions of their LiveCD, for instance, from say 2003. As long as they are still distributing that LiveCD, AND FOR THREE YEARS AFTER, in order to comply with the GPLv2, they must make the sources used to compile the binaries on that CD available as well. So if they're still offering that historical 2003 LiveCD in August 2009, they'd better still have the sources used to create it available thru August 2012, or they're in violation of the GPLv2 that at least the Linux kernel shipped on that LiveCD is licensed with. It's easy enough to forget about that part and thus be in violation, when they can't provide sources for the binaries that were current way back in 2003, but that they may well still be making available "for historical interest" on that 2003 LiveCD.

Once distributions become aware of this catch, many make it policy to ensure that the actual sources are made available at the time of distribution along with the binaries, instead of simply providing the notice that people can request them later, so they don't have to worry about that 3-year thing.

Of course, if an organization is on top of things, and sets it up so they're tarballing all the sources for a particular shipped version and indexing them by product release and/or model number (and perhaps serial number as well, if they do in-line firmware updates), then the cost and hassle of archiving said sources for compliance on a product originally shipped in 2003 -- which, if it was still available in 2009, must have sources available thru 2012 to comply with the sources-notice provision -- will all be fairly trivial. But they have to have thought that far ahead, and be actually archiving things in a manner that lets them retrieve the sources as used for a specific product they shipped, not merely the latest updated version thereof.
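As a sketch of what that sort of per-release archiving can look like in practice (the paths, naming scheme, and index file here are all made up for illustration, not taken from any actual product process):

    #!/bin/bash
    # Archive the exact source tree used for a given product release, so it can
    # still be produced on request years later.  All names below are placeholders.
    PRODUCT=$1      # e.g. "router-x100"
    VERSION=$2      # e.g. "1.2.3"
    SRCDIR=$3       # the tree actually used for this build, patches and all

    ARCHIVE=/srv/gpl-sources/${PRODUCT}-${VERSION}-sources.tar.gz
    mkdir -p /srv/gpl-sources
    tar -czf "$ARCHIVE" -C "$(dirname "$SRCDIR")" "$(basename "$SRCDIR")"

    # Keep a simple index so the archive can be located by release later.
    echo "$(date +%F) ${PRODUCT} ${VERSION} ${ARCHIVE}" >> /srv/gpl-sources/INDEX

The important part isn't the tooling, it's that the archive is of the tree actually used for that build, not whatever the current tree happens to be when someone asks.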

If I make software that runs on a linux distribution and set linux to run that software at boot-up does that mean I'm really altering linux itself?

It depends. If all you did was alter an initscript, then ordinarily no, tho if you're shipping even unmodified GPL binaries (including the Linux kernel), you likely need to ship sources for them anyway. If you modified the Linux kernel sources directly, or if your software is a Linux kernel module, therefore running in kernel space not userspace, then that's kernel modification and you'd be covered under kernel modification rules.

Do note that the Linux kernel is a special case in a number of ways, however. First, the Linux kernel license specifically disclaims any claim on derivation for anything running exclusively in userspace -- using only kernel/userspace interfaces. The userspace/kernelspace barrier is declared a legally valid derivation barrier, so if you stick to userspace, you are defined as not being derived from the kernel, and the kernel's GPL doesn't apply. Second, the global kernel license is specifically GPLv2 only, so GPLv3 doesn't even enter the equation as far as it's concerned.

Of course, if your software runs on Linux in userspace, it's very likely linked against some very common libraries, including glibc. But glibc and most if not all of the other common core libraries are LGPL, not GPL, so unless you modify their code, you're probably safe there. That may or may not apply to other libraries you've linked to, however.

All that said, assuming your software isn't linking to anything it shouldn't if it doesn't want to be GPL-encumbered, simply setting up an initscript or the like to invoke it on boot, or even setting it up (using a kernel command line parameter) to load in place of init, isn't by itself likely to trigger the modification or derivation clauses for anything GPL that might also be shipped together with it.
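To illustrate how little that amounts to, here's a minimal sysvinit-style sketch (the script and /usr/local/bin/myapp are placeholders; a real distro's init system has its own conventions):

    #!/bin/sh
    # /etc/init.d/myapp -- start the proprietary userspace app at boot.
    # /usr/local/bin/myapp is a stand-in for your own binary.
    case "$1" in
      start) /usr/local/bin/myapp --daemon ;;
      stop)  killall myapp ;;
      *)     echo "Usage: $0 {start|stop}" ;;
    esac

The load-in-place-of-init case is similarly just a matter of appending something like init=/usr/local/bin/myapp to the kernel command line in the bootloader config.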

But, for commercial Linux-based hardware or software products, that still doesn't get you away from having to make source available for any GPLed products you distributed. The one exception that I know of is if you burn it into ROM, such that neither you nor the customer can upgrade it without physically replacing that chip; THEN you can ship GPL-based hardware /without/ having to ship sources. But if it's firmware-upgradeable at the factory, then sources must be made available.

Comment Re:Pygmalion Effect (Score 1) 98

That was /close/ to my feeling as well, only...

What has surprised me is that nobody is congratulating these guys on being great capitalists! They've found a very clever way to separate people from their money, while providing at least /some/ real value, so they're not so quickly seen as simple scammers.

Here's what I see happening, and what it seems most have missed, here. They're doing a several day summer camp -- only they're selling it as MORE than that -- and, if their advice isn't taken to extremes, it won't do any harm (the kids are having a good time at camp, according to TFA), and /could/ be somewhat beneficial. The genetic testing and aptitude evaluation is simply their hook, something no other camp has, thus the reason to choose them (and thus the reason they can charge more) as opposed to the others.

Think of it this way. Yes, there's not much positive predictive value in those genetic tests... yet. However, what they're /really/ doing is observing the kids during the camp and seeing what they like to do, if they're more interested in sports or computers, video games or crafts, how outgoing they are, etc. This very likely forms the basis for the majority of their guidance.

But with a bit more training and observation, more or less any camp can do that, if they aren't doing so already in a one-child and thus very competitive society. If one camp's able to charge extra for it, pretty soon they'll all be doing it, and the one won't have its formerly unique hook any longer.

So, they throw in the genetic testing as well. Probably they screen for a few simple things, Down Syndrome, etc, even tho if something was really wrong it would likely be obvious from the behavior. But this way, if something shows up, they can throw it in too. Otherwise, not so much. But they /did/ mention screening for known height genes for, say, basketball players. Yeah, food and environment play a big part too, but again, they're observing the kids at camp for several days as well, and if a kid is already a fat slob at 3-5 years old, they're probably not going to emphasize basketball even if the kid /does/ have a height marker or two.

So anyway, at the end of the five-ish days, they have a fair idea what the kid is interested in, and probably emphasize that. Then they throw the genetic stuff in for good measure, but very possibly fill in a lot more than is actually there, much like a crystal ball or palm reader, by simply being observant and basing their "genetics report" on that.

So the kids have a great time at camp. The camp staff observe them and tell the parents what the kids enjoy. The parents are happy to pay for the camp and service and the kids are happy with it too, and the camp has its hook to set it apart from all the other camps out there and make some extra money in the process!

Then, as you said, the Pygmalion effect kicks in, and since the parents got told that the kids are good at what they enjoy in the first place, they're encouraged to do what they already enjoy, and "what-ju-know", a few years down the road, the camp has a name for itself for predicting so well! =:^)

Of course, at 3 years old, there's probably a limit to how precise they can truthfully get, but they may be able to give some guidance, and I'm sure they can convince parents who already want to believe that there's a good reason to bring the youngest back in a few years, perhaps getting them three times over the 9 years (ages 3-12) that are covered, or even every year, for fine-tuning as they grow!

So yes, it /could/ be bad, but I don't necessarily see it as bad given what was presented. It looks much more like just another hook, another unique bit of sales material that sets them apart from the other camps. Particularly since it /is/ coupled with behavioral observation at a several days' camp, and because it /is/ a camp the kids will (and do, according to TFA) enjoy, I see it as likely just as beneficial as any other summer camp could be -- at minimum -- and potentially a bit more so. Again, that's PROVIDED the parents don't get psychotic about it, but then, if they were going to do that, they'd likely be doing it anyway. And there's at least SOME chance that, at minimum, the kids get away from that over-driving parental force for a few days, and MAYBE, just MAYBE, the camp's advice will be able to steer the parents into at least driving the kid toward something the kid wants, not something the parents THINK they should want.

So at least given what's in the summary and in the article, I see no reason why this has to be bad at all -- in fact, it could have pretty good results. Of course, there's no way we have enough detail to know for sure either way, but it's certainly not necessarily the doom and gloom that so many folks here seem to think it is, at least not based on the facts as presented in either the summary or TFA itself.

Comment Re:Umm... (Score 1) 585

Good point, and I agree it was a bit confusing, but I had in mind that use requires copying -- and note that at least one court in the US has held that even the copy from storage to memory in order to run is covered by copyright, so running directly from a purchased CD (well, unless you're running some sort of XiP, execute-in-place, technology that doesn't even copy it to memory, as happens with some flash-based embedded systems, for instance) without any copying to the hard drive or other media is not necessarily a way out, either.

Now, at least in the US, there's the right of first-sale, aka exhaustion rule, but even there, the original user must have been granted permission to possess and use a copy before they can transfer that permission to someone else. Thus, what I was really thinking when I said "use" was that until purchase or other grant of permission, copyright forbids use, especially since that use must involve the procurement of a copy in some way or another.

Once there is permission to legally possess a copy -- purchased, downloaded, obtained by mass mailing as with AOL disks, whatever -- you are correct, actual usage of that copy isn't restricted save for making another copy (but see above, where some judge ruled that simply copying it to computer memory in order to run or play it is creating a copy; OTOH, this would be an implied permission provided you've legally obtained the work in the first place... and in the US, the implied permission applies to copies made for backup too, I believe). But my point was that the author's/owner's permission must be obtained in order to legally get the copy in the first place, and that involves copyright.

But I really should have been more careful in stating that point. Regardless of what I had in mind, your calling me on what I actually wrote was certainly valid given that it didn't actually express what I had in mind. So thanks for bringing up the point. I'm glad someone's on the lookout for such omissions and doesn't hesitate to point them out! =:^)

Comment Re:Umm... (Score 1) 585

Addressing the last bit, about the user, first: while anyone can make a claim, it's extremely unlikely that a claim against a mere user based on violation of the GPL would stand up in court, because the intent (and wording, it seems to me, but IANAL) is clear that a user may do what they want, including modification, linking with whatever proprietary program, etc, as long as they don't distribute the result. Thus it is that mere users of proprietary kernel modules don't have to worry about linking them into the kernel, but distributions have to be very careful about what they distribute, and of course the authors of those proprietary kernel modules have something to worry about, with the trend toward stricter enforcement and occasional threats of legal action, but nothing serious yet.

Thus while you are correct that such a thing would cause an uproar in the community, it's not realistic that the case against said user would get very far -- and I expect the FSF would end up on the user's side against whatever "insane" author, as well.

As for the coder linking against the GPL library, ultimately, that comes down to how derived the new code is found to be (under copyright law -- US law in consideration for GPLv2, more general international law for v3). For static linking, the answer is pretty simple, as the GPL library will then be shipped as part of the binary. For dynamic linking, the case is blurrier. However, answering your question, consider that the headers needed to compile the code against the library can be argued to be part of the GPLed code as well, tho with some limits as to how far that can be taken (I know there are limits there, but won't attempt to define them as I'm a bit vague on that end of things myself). Using that definition, anyone using the headers to link against would be creating derived code.
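To make the static/dynamic distinction concrete, here's roughly what the two builds look like from the developer's side (libfoo is a stand-in name for the GPL library in question, not any particular library):

    # Dynamic linking: only a reference to the library ends up in the binary;
    # the GPLed code itself gets loaded at run time from the user's system.
    gcc -o myapp myapp.c -lfoo
    ldd myapp | grep libfoo       # shows libfoo.so as a runtime dependency

    # Static linking: the contents of libfoo.a are copied into the shipped
    # binary, so distributing myapp unambiguously distributes the GPLed code.
    gcc -static -o myapp myapp.c -lfoo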

But perhaps that headers argument wouldn't ordinarily be seen as a correct interpretation. Still, by downloading and using the libraries and their headers on a system, thus making copies of them -- something forbidden by copyright law without the code owner's permission (as here granted by the GPL) -- a developer is going beyond what mere copyright allows. Again, for a mere user that's not a problem, as the GPL is explicit in unconditionally allowing that, and a mere user doesn't even have to agree to it either, as the license at that point is covered under the automatic-grant-to-recipients-under-the-same-conditions clause. But it's the GPL that allows it, absent other permission, since copyright law forbids it, and once someone starts distributing it (or a derived work, see below), then they must accept the GPL themselves in order to do so.

Further, just doing the development and not distributing anything isn't a violation. But as soon as the developed code linking to that GPL library is distributed, THEN the derivation question comes into play.

And again, under ordinary circumstances, that's a logical and legitimate question, one that hasn't been definitively settled in court, AFAIK. HOWEVER, and this is where my previous comment enters the picture, when developers start looking at the actual logistics of defending themselves, it quickly becomes apparent that by questioning the author's intended and specifically stated definition of "derived", they're starting to saw notches into the branch they're legally sitting on, that branch being the validity of the GPL that gave them the rights to copy and code against the library in the first place.

Thus, the rare appearance of the GPL in actual court, because by trying to argue about the definitions of derived, the whole legal foundation the developer is standing on is being eroded.

It's also worth pointing out the effect of a severability clause and the fact that the GPL (at least v2, which I'm most familiar with) has what could be considered a NEGATIVE severability clause in effect -- that if it's impossible to satisfy patent and other legal requirements and the GPL, then distribution must cease entirely. Again, by trying to poke holes in the definition of derived, the accused violator quickly finds they are poking holes in the whole legal justification for their use of the work in the first place.

It is at this point that the reasonable offer of little more than correcting the violation going forward plus expenses begins to look rather inviting, especially when faced with the possibility of the thing spiraling entirely out of the developer's control, possibly including a no-ship injunction (maybe complete with recall of already in-channel units) while the case proceeds, the threat alone of which is generally enough to get the accused looking to settle, even if they think they'd ultimately win the court case. Against that, opening the source or very quickly finding a replacement library so as to continue uncontested shipping, while additionally paying little more than to-date expenses, looks VERY good indeed!

Of course, it also helps that AFAIK there's a 100% success record (with one mild variance against the FSF GPL FAQ in regard to trade secrets, but it dealt with the enforceability of those, not of the GPL), not only in settlements but in the few cases that HAVE gone to court, now several, including one in Israel, several in Germany, and a couple here in the US. That's going to be STRONG encouragement to settle as well, thereby keeping the accused in control of the terms and of their continued ability to distribute, to say nothing of avoiding the now even worse odds of actually winning the case in court.

But even so, yes, the question of just what's derived remains, as does the fact that so far, every potential violator has decided to settle rather than risk an unfavorable resolution. (Then again, the FSF and others have never pressed the riskiest cases, either. That's one reason proprietary kernel modules continue to ship, and the threats that have happened tend to be against Linux distributions, not the module developers themselves, as the distributions tend to be far more sensitive to community PR effects in addition to the normal legal issues and thus much more likely to back down nearly instantly.)

Comment Re:Umm... (Score 2, Informative) 585

That's the beauty of the GPL, in that if the opposition (defendant, perhaps plaintiff in a declaratory judgment) doesn't accept the license, everything defaults back to standard copyright protections, which don't allow use/copying/distribution in the first place, thus being far stricter than the GPL. They are thus free not to accept the GPL, but should they do that, they better make very sure they are not infringing standard copyright, either, because that's the fallback.

That's also why the legal system is likely to rely on the author's (or license author's) clearly laid out interpretation of his chosen license as well -- unlike EULAs and the like, which seek to impose additional restrictions on top of copyright as a condition of granting limited use under copyright, the GPL does not restrict what copyright law already permits. As such, the author's interpretation of the conditions under which he grants those additional permissions holds a lot of weight because the defaults would be far stricter.

IMNLO (NL=non-lawyer), this is why the very high majority of GPL violation cases ultimately settle out of court. Once it's pointed out that should the GPL NOT be valid, the fallback is far stricter copyright law, which would put the violator in a far worse position than they are in with the GPL violation, most reasonable people quickly see the light, and decide the chances of risking even WORSE penalties for full copyright violations are simply not worth it. Thus, they find that abiding by the GPL in the future is in their best interest, and all that must then be agreed is the penalty for past violation. The FSF and friends seem to be relatively flexible in that regard, since the object is after all to get the code out there, and a settlement for costs, plus a normally undisclosed but apparently reasonable donation, plus some measures (a license compliance officer, agreement on release breadth and deadlines, etc) to ensure follow-thru, seems to be common. Compared to the penalties a company would pay for violating a proprietary license or raw copyright law, that's very reasonable indeed, and most violators quickly see how mutually satisfactory the offered settlement conditions actually are.

Comment Re:No problem. So what's the alternative? (Score 1) 417

I'm a bit west of you, in Phoenix. I agree with the sentiment, but at least here, the practical side of it doesn't work.

Really, one of my big problems is that I find too much interesting. A few years ago I got the paper here for a while, then quit. One problem was, as I said, that I found too much interesting, and it thus took longer than I normally had to read the paper every day. So they began to pile up...

What I quickly realized is that really, I'm inundated with data, and I really do find a lot of it reasonably interesting. What I'd find valuable, therefore, would be some sort of assistant that would go thru and edit and prioritize to my interests and needs, cutting out the noise and low priority stuff, to let me absorb more higher priority stuff in less time.

The newspaper as it was (and is) wasn't doing that. As such, it wasn't (and isn't) valuable to me. News on the net, with various ad-filters, etc, cutting out that bit of noise, and with subscriptions to various site feeds for the news I find high-priority interesting, worked MUCH better for me, and was free.

Now the ONE thing the net was NOT effective at was presenting me local news (including ads, tech stuff like Fry's Electronics and Best Buy, department store stuff, less so groceries and coupons, since couponed products tend to still be higher priced even with the coupon than generics, ESPECIALLY when the non-zero time/opportunity cost of checking them all the time is taken into account) that affects me right where I live and work and play. That's what a local paper (or other local news source, on the net or elsewhere) needs to provide. If they provide national and international news, fine, but when that's already available on the net in a pretty efficient form, I'm not buying it for that.

It also needs to be in a physical format that's easy and convenient to use. Unfortunately, the local paper wasn't providing a whole lot of either of these. Yes, there was some local politics, etc, news that affects me, but the SNR was simply way too low, and it took me way too long to gather and process the information -- it wasn't very efficient.

When they called up wanting me to resubscribe, I told them I would, but only to a somewhat different product. Physically, broadsheets simply are not convenient to read. One needs way too much space to manipulate them, and when the stories get continued on some other page, to manipulate them again. It's just not convenient, and there's a FAR more efficient format than the broadsheet available now, called the tabloid. Yes, tabloids have a bad name, but I'm interested in content I can actually ingest and use efficiently, and one has to admit they're certainly much more convenient to actually use!

Also, a daily paper just wasn't working. I told them that if they had a weekly or twice-weekly that featured mostly local political and similar news and carried the department store and electronics ads (and the comics for the week, or at least that day, would be nice too), I'd be VERY interested, and would likely buy it in a heartbeat even at the same rate they had been charging for the entire paper.

It was little surprise when they politely offered to put my number on file (yeah, probably circular file...) and get back to me if they ever came up with such a product...

Which is of course the problem. The net offers pretty close to this sort of specialization, tho unfortunately seldom on local news and happenings except in some of the bigger cities where the paper or a local TV station has a lot of that content on the web as well. But that's what users are demanding now, that sort of specialized channel, that sort of convenience, because they're used to getting it on the net, AND because in today's ever increasing "information overload" society, it's becoming an absolute NECESSITY!

Agree on the ads though. If a small banner saves me a nickle per click, I'll put up with it.

I think I'd have to disagree, not on the idea, but because in practice, it wouldn't actually save me anything for long.

I'm 42 now, and one thing I've realized the last few years, as I guess pretty much everyone does at about this age, is that life *IS* going to end before I get even 1/10 of what I'm interested in and would /like/ to do actually done.

As a consequence, and again, this process is anything /but/ unique to me, as I age, I've tended to focus more and more narrowly on what I /really/ want to do, what I /really/ find important and of priority, and by the same token, on things I can accomplish efficiently, making the best use of the time I have available. It is thus that while as a teen I was interested in and would read all sorts of science stuff, pretty much anything I could get my hands on, I'm much choosier now, increasingly narrowing my focus and specializing, and doing so even more as time goes on. One of the first things I deliberately chose not to pursue, as I had a friend that was into it and could have done so, was ham radio. Then it was the wider world of science, tho I do still try to keep a summary level familiarity with the biggest developments in many areas (thus my interest in /.). I focused in first on the electrical side, then electronics, then computers, then on freedomware, finding at each step of the way that I really didn't have time to deal with the rest any more at the same level of detail that I had been, if I was to find /any/ time really to further my learning in my increasingly narrow field of real interest. Of course, with the net, we're finding ever more information and detail available at our fingertips, as well, leading to information overload until some method of coping, either increasingly narrowing the field of interest or increasingly limiting the depth at which one studies the broader field (or some of both), becomes mandatory, or until we go crazy trying to do the impossible.

It is within that very practical context that I must interpret your comment quoted above, and state my disagreement at least in practice.

Here's what'd actually happen, for me: Sure, the first couple times I'd see the ads and click on them if it saved me that nickel, because I'd be interested in whatever it was that led me there. Beyond that, however, I'd quickly realize the non-zero time/opportunity cost, and despite my interest in the subject leading me to the site in general, would increasingly and probably subconsciously find my internal priority and efficiency processing algorithm directing me elsewhere -- the site would simply be too expensive in lost opportunity: either that nickel PLUS the time spent that couldn't now be spent on anything else I find interesting, OR the time spent reading PLUS the noise and hassle factor of the ad, probably insulting my intelligence since, in large degree, most ads are targeted at the programmable zombies that can most effectively be influenced by others telling them what they /should/ think and buy, and less at the people who actually find thinking for themselves stimulating and pleasurable, and are thus less influenced by such ads.

Of course, this is where it all comes full circle to adblockers and dying newspapers trying to charge for their online content as they did for their offline content, perhaps vs. ads. Yes, if it's valuable enough to me, I'll pay for it, one way or another. However, given the limited time in life I have left and the perhaps 10% of what I'd /like/ to do that I'll actually /get/ to do before I die, increasing the hassle factor OR the monetary cost very likely moves it far enough down my priority list that it no longer actually gets done, with any one of ten or a hundred or a thousand things taking its place, depending on how egregious I find the added cost vs the priority I had placed on it and the other things, and thus how far down the list it actually drops.

Now if it were just me, no big deal. However, Murdoch and others are gambling that reading their content won't drop that far in people's lists as the cost goes up, while the /. denizens predict it will. We'll see, I guess. As for me, as I said, there's more stuff on the list where that came from. If worse comes to worst and the net ceases to have any place worthwhile to visit at a cost that doesn't bump it down out of worthwhileness, I do have all those scifi books I was once interested in but never got to read before scifi dropped out of my practical priority range...

Comment Re:To my very pleasant surprise... (Score 1) 321

Meh... kde/konqueror 4.3 worked... once I enabled scripting globally (except for the specifically banned sites, including google analytics, which was scripted in to run here), something I don't normally do, and wouldn't have done here had I not already run firefox and known exactly what scripting I was allowing.

With iceweasel/firefox, noscript told me exactly what sites the page wanted to load scripting from, and if I'd wanted to, I could have called up individual scripts to inspect them (without going to the page's source code to find them) using JSView, before deciding to allow that site's scripts. That makes it **FAR** easier to manage scripts and decide what I want to allow and what not to allow. Once I saw it was just the one site, and twitter (and google analytics, which is of course set as permanently untrusted in noscript/firefox, here, and specifically disallowed in konqueror, as well), I went ahead and allowed it in firefox.

Then, knowing from firefox what sites it wanted to script, I allowed it in konqueror as well, and could compare the performance. But the thing is, without firefox's extensions, I'd have most likely given it up as not worth the bother and the risk of enabling scripting globally /without/ verifying. That's really what konqueror is missing now: not most of the standards compliance and rendering, but all that "extra" stuff in the form of extensions, etc, that it's impractical if not flat-out impossible for a small group of developers to do, but that a whole global community can do. Yes, konqueror has the ability to do extensions, but the konqueror community is simply not of critical mass. If they'd get together with Google and Apple and Nokia/Qt and whoever else is doing KHTML/Webkit-based browsers now, and form a single community around a single Webkit-compatible extension framework that they all supported (or even got together with Mozilla and supported its extension standard, given the huge lead it already has in that regard), then the community would achieve critical mass, and it's very likely all the webkit/khtml-based browsers would quickly develop the same extension community and the same flexibility in usage, including the same nice UI-based per-source-site script viewing and permissions that the JSView and NoScript extensions give me on firefox.

Meanwhile, on Gentoo, with both iceweasel/firefox and konqueror built from sources to native 64-bit amd64 code, running on a now older dual Opteron 290 (thus dual dual-core, 2.8 GHz) w/ 8 gigs RAM on kde 4.3.0, iceweasel did seem to run slightly smoother here, but not really noticeably so. And of course iceweasel gave me sound. ... Which reminds me, I really just switched from kde3 over the last couple weeks, with 4.2.4, now 4.3.0... Wasn't a big thing about kde4's phonon that it had per-application audio control? How come I don't see any such thing? Or is that only with pulse-audio, which I don't have enabled here? But if so, why was it called a kde4 feature, when I guess it would then be a pulse-audio feature, not kde4-specific? I've stuck with plain alsa and phonon's xine-engine backend piping into alsa, as I've really seen no need for the additional complication of a sound server when my audio hardware (ac97 codec on amd8111, using the analog devices 1981b/amd8111/i810/sis/nVidia/ALi ac97 alsa driver) apparently has no issues with hardware mixing. But if this so-called kde feature requires pulse-audio... maybe I'll do without, as I haven't really needed it so far.

(Back on kde3, instead of running arts, I ran what I believe was an alsaplayer wrapper, set to 50% volume, for kde's sound notifications, so my normalized sound effects didn't get too loud. That's still possible with a lot of apps. And enough apps have their own volume levels that it's not a big deal. FWIW, I finally got tired enough of amarok bloat, killing the kde3-version features I liked only to add kde4-version junk and dependencies I had no use for, that I kicked it, and now run mpd with a variety of clients... AND its own nicely lightweight music library and volume handling. But I don't have a kde4 version of kaffeine yet; still have the kde3 version of it. Dragonplayer just doesn't make enough user control available...)
