
Comment Re:The right to read. (Score 1) 72

Authors have right to be paid for their work.

Nobody has a "right" to be paid for their work. If I doodle in a sketchbook, or shit in a bucket on stage, there is no fundamental right to be paid for that just because it feels like work to me, or looks like work to a bystander. In the same way, those guys that wander into traffic and wash your windshield without asking you don't deserve to be paid for making driving more stressful.

This is relevant to the discussion because the "right to be paid for work" soundbite is used in enthymemes like this:

1. $random_thing happened.
2. We could (a) pay the artist every time it happens, or (b) not pay the artist, some or all of the times it happens.
3. The right answer is (a) because artists and authors have a right to be paid for their work, and the situation wouldn't have arisen if they weren't working.

I agree we should pay artists and authors to convince them to do more work. Beyond that, I also agree artists should earn enough to live with dignity, matching our respect for the class of work they do, and this isn't happening uniformly enough right now. Neither is the same as "right to be paid for work" because neither implies that they should be paid every time a thing happens, nor even that they need to be paid specifically _for work_ at all (not that it's necessarily a bad idea to pay them "for work", just that it's not implied by what I agree with, and is not a "right").

This is a ridiculous argument, and we should stop making it. You're making it even sillier by adding:

  4. But we could also (a) make an exception, or (b) not make an exception.
  5. Right answer is (a) because "mah propertah," or something.

Instead we need to go back to 1 - 3 and make them complicated enough to capture what's really going on.

Comment Re:Almost makes me want to live there (Score 1) 77

Don't let today's positive news cycle make you forget that the "EU government" are also the ones who passed the mandatory data retention law, which is worse than anything going on above the table in the US.

This shouldn't be reduced to an ad-hominem comparison of countries, but I admit I also thought, "Now it's harder for the US to pass a mandatory data retention law."

Comment oversimplified PR noise ignores decade of research (Score 4, Interesting) 105

The bufferbloat "movement" infuriates me because it's light on science and heavy on publicity. It reminds me of my dad's story about his buddy who tried to make his car go faster by cutting a hole in the firewall underneath the gas pedal so he could push it down further.

There's lots of research on this dating back to the 90's, starting with CBQ and RED. The existing research is underdeployed, and merely shortening the buffers is definitely the wrong move. We should use an adaptive algorithm like BLUE or DBL, which are descendants of RED. These don't have constants that need tuning, like queue length (FIFO/bufferbloat) or drop probability (RED), and they're meant to handle TCP and non-TCP (RTP/UDP) flows differently. Linux does support these in 'tc', but (1) we need to do it by default, not after painful amounts of undocumented configuration, and (2) to do them at >1Gbit/s we ideally need NIC support. From what I hear, Cisco supports DBL on cat45k sup4 and newer, but I'm not positive, and they leave it off by default.
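
For the record, here's the sort of thing I mean, as a sketch only---it assumes a kernel and iproute2 new enough to ship the SFB (Stochastic Fair Blue) qdisc, and 'eth0' is a stand-in for whatever your bottleneck interface is:

    # replace the default FIFO with Stochastic Fair Blue, a BLUE descendant
    tc qdisc replace dev eth0 root sfb
    # confirm it took, and watch its marking/drop statistics
    tc -s qdisc show dev eth0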

For file sharing, HFSC is probably more appropriate. It's the descendant of CBQ, and is supported in 'tc'. But to do any queueing on cable Internet, Linux needs to be running, with 'tc', *on the cable modem*. With DSL you can somewhat fake it, because you know what speed the uplink is, so you can simulate the ATM bottleneck inside the kernel and then emit prescheduled packets to the DSL modem over Ethernet. The result is that no buffer accumulates in the DSL modem, and packets get laid out onto the ATM wire with tiny gaps between them---this is what I do, and it basically works.

With cable you don't know the conditions of the wire, so this trick is impossible. Also, end users can only effectively schedule their upstream bandwidth, so ISP's need to somehow give you control of the downstream---*configurable* control, through reflected upstream TOS/DSCP bits or something, to mark your filesharing traffic differently, since obviously we can't trust them to do it.
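
Here's roughly what that DSL trick looks like in 'tc', sketch only: the rate and overhead numbers are placeholders you'd have to measure for your own line, and it assumes an iproute2 new enough to have the 'stab ... linklayer atm' size tables that account for the ATM cell tax:

    # shape slightly below the sync rate (say a 640kbit uplink) so no
    # buffer ever builds up in the modem; 'stab linklayer atm' charges
    # each packet for its 53-byte ATM cells (48 bytes of payload each)
    tc qdisc add dev eth0 root handle 1: stab linklayer atm overhead 32 \
        htb default 10
    tc class add dev eth0 parent 1: classid 1:10 htb rate 576kbit ceil 576kbit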

Bufferbloat infuriates me because it's blitheringly ignorant of implemented research more than a decade old, and it's letting people feel like they're doing something about the problem when really they're just swapping one bad constant for another. It's the wrong prescription. The fact that he's gotten this far shows our peer review process is broken.

Comment Re:Roku is linux (Score 1) 481

A Hollywood movie on a DVD-R (no CSS, so no DeCSS needed) would be compatible with software freedom so far as I can see, because I don't think there's any difficulty playing it with vlc.

But, for example, it's difficult to sell a DVD player that lets you skip the FBI warnings and the ads at the beginning of kids' DVD's, or turn off Macrovision on an NTSC output, or play DVD's from all regions, or take .jpg screenshots of a playing DVD, because a network of laws (DMCA), industry associations granting licenses (DVDCCA), and key material with ``renewability'' regimes and silly copyright protection (the CSS player keys) forces player manufacturers to obey the instructions on the disk rather than the player's owner's commands.

If you had software freedom, like you do when you assemble a free software player based on vlc + libdvdcss including all player keys + a Dangerous Brothers RPC-1 ROM for your DVD drive, then you would immediately remove all these restrictions, if they still existed, by recompiling vlc. Thus the DMCA+DVDCCA+CSS network has to prevent you from playing DVD's with vlc, period, if they want to keep their content inside the DRM wrapper.

Note that I'm not saying it's currently impossible to buy a region-free DVD player---obviously it IS possible, but it has in the past been difficult. It's subject to legal circumstance. However, if a DVD player with software freedom existed, it would be region-free and free of all these other restrictions too, because the first geek who touched it would remove them all and then share his work. THAT part is fundamental and not subject to circumstance: vlc never had any of these restrictions at any point when it's been able to play a DVD.

It's DRM that's incompatible with software freedom, not the movie itself. I'm not confused: movies without DRM do exist, and they need not be free-as-in-$0 nor free-as-in-resamplable to be without DRM.

What's more, DRM's not incompatible with Linux. DRM can be done on Linux because Linux is locked at GPLv2. If Linux could be upgraded to GPLv3 (it can't, because Linus deleted the ``or any later version'' clause and didn't centralize copyright ownership), then DRM on Linux would become a lot more difficult, because the Tivoizing and appstore-chain-of-signatures tricks wouldn't work---but I bet it would still be easy in practice to do sufficiently frustrating DRM under a GPLv3 kernel.

Comment Re:Roku is linux (Score 5, Insightful) 481

This is a great point: Linux isn't incompatible with DRM, but open source is. If you gave people a DRM player for which they truly had in-practice software freedom, the first thing they'd do is remove all the DRM.

The post confuses Linux and open source, but Netflix is still fundamentally an anti-software-freedom company because their entire business is built on DRM which will always be incompatible with software freedom.

Actually writing a Linux client has nothing to do with any of this. The streaming part of Netflix's business makes them into subcontractors of the Hollywood studios: they deliver Hollywood content to eyeballs with iron-clad digital restrictions management in exchange for a cut of the fees flowing back to the studios. DRM is their entire business. They will always be primarily harmful to any real movement for software freedom.

Linux actually makes a great DRM platform: TiVo inspired a whole term for it, ``tivoization'', where you have all the source code and the ability to recompile the kernel, but then you can't run it anywhere because the hardware only runs signed kernels.

Likewise, I think the Android app store is extending this all the way down to userland, right? Where, for example, Skype will only run on phones with ``untampered'' Google-signed kernels and hardware? I might be wrong---hard to keep up.

Anyway, why wasn't the DRM vs. software freedom point in the first post? I thought every Linux user knew this. Do people really think Linux == $0, and that's that?

Comment Re:This is well known to a small community (Score 5, Insightful) 123

Yes, that's my understanding as well---the point of slow start is to go easy on the output queues of whichever routers experience congestion. So if congestion happens only on the last mile, a hypothetical bad slow-start tradeoff does indeed affect only that one household (not necessarily only that one user); but if it happens deeper within the Internet, it's everyone's problem, contrary to what some other posters on this thread have been saying.

WFQ is nice, but it currently seems to be too complicated to implement in an ASIC, so Cisco only does it by default on some <2Mbit/s interfaces. Another WFQ question is: on what inputs do you do the queue hash? For default Cisco it's on the TCP flow, which helps for this discussion, but I will bet you (albeit a totally uninformed bet) that a CMTS will do WFQ per household, putting all the flows of one household into the same bucket, since the goal is to share the channel among customers, not to improve the user experience of individual households---they expect people inside the house to yell at each other to use the internet ``more gently'', which is pathetic. In this way, CMTS WFQ won't protect a household's skype sessions from being blasted by MS fast-start the way Cisco default WFQ would.

If anything, cable plants may actually make TCP-algorithm-related congestion worse, because I heard a rumor they try to conserve space on their upstream channel by batching TCP ACK's, which introduces jitter, means the window size needs to be larger, and makes TCP's downstream more ``microbursty'' than it needs to be. If they are going to batch upstream on purpose, maybe they should timestamp upstream packets in the customer device and delay them in the CMTS to simulate a fixed-delay link---and they could do this timestamp+delay per-flow rather than per-customer if they do not want to batch all kinds of packets (e.g. maybe let DNS ones through instantly).

RED is not too complicated to implement in an ASIC, but (a) many routers, including DSLAM's, actually seem to be running *FIFO*, which is even worse than RED, because it can cause synchronization when there are many TCP flows---all the flows start and stop at once; and (b) RED is not that good, because it has parameters that need to be tuned according to approximately how many TCP flows there are. I think BLUE is much better in this respect, and is also simple enough to implement in an ASIC, but AFAIK nobody has.
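
To make the tuning complaint concrete, this is what RED looks like in 'tc'---every number here (queue limit, min/max thresholds, average packet size, drop probability) is a constant that's only right for some particular flow count and link speed, and these particular values are illustrative, not recommendations:

    # classic RED: drop probability ramps from 0 to 2% as the average
    # queue grows from 30kB to 90kB; all of these need hand-tuning
    tc qdisc add dev eth0 root red limit 400000 min 30000 max 90000 \
        avpkt 1000 burst 55 probability 0.02 bandwidth 10mbit ecn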

I think much of the conservatism on TCP implementers' part can be blamed on router vendors failing to step up and implement decades-old research on practical, ASIC-implementable queueing algorithms. I have the impression that even the latest edge stuff focuses on having deep, stupid (FIFO) queues (Arista?) or minimizing jitter (Nexus?). Cisco has actually taken RED *off* the menu for post-6500 platforms: the 3550 had it on the uplink ports, but the 3560 has ``weighted tail drop'', which AFAICT is just fancy FIFO. I'd love to be proved wrong by someone who knows more, but I think they are actually moving backwards rather than stepping up and implementing BLUE.

And I like very much your point that caching window sizes per /32 is the right way to solve this, rather than haggling over the appropriate default---especially in the modern world of megasites and load balancers, where a clever site could appear to share this cached knowledge quite widely. But IMSHO routing equipment vendors need to be stepping up to the TCP game, too.
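
For what it's worth, Linux hosts already do a version of this per-/32 caching, which you can poke at from the shell (the 'ip tcp_metrics' command needs a newer iproute2, and the addresses below are placeholders):

    # 0 means Linux caches TCP metrics (cwnd, ssthresh, rtt) per peer
    sysctl net.ipv4.tcp_no_metrics_save
    # inspect the per-destination cache
    ip tcp_metrics show
    # per-route override of the initial window, for a path you know is fat
    ip route change 203.0.113.0/24 via 192.0.2.1 dev eth0 initcwnd 10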

Comment Re:DESQview (Score 1) 347

DESQview was more like VMware than like present-day Windows NT/XP/... It had some hypervisor memory-management syscalls that were standardized and could be called by programs running within it. The standard was called LIM EMS, the Lotus/Intel/Microsoft Expanded Memory Specification. It was implemented by QEMM386.SYS using the 80386's virtual-8086 support, but it was also implemented by hardware ISA memory expansion cards with MMU's on them---I don't think many people bought these, but you could use >1MB on a 286, and it was compatible with the DESQview API. This API was later absorbed by Microsoft (DEVICE=EMM386.EXE).

People forget how important this API was: for many years it was common to have a computer with 2 - 16MB of RAM but no reasonable way to get programs to actually use that RAM. One way was to use programs like DESQview to run several API-less programs at once, sort of like how we can now virtualize many 4GB 32-bit guests on a 128GB 64-bit host. Another way to get at the extra RAM was through these ad-hoc-MMU bank-swapping PAE-like API's. The most common way was to burn it on useless things like TSR disk caches and pop-up PIM's.

The forgotten part of the story here is how much *Intel* sucked. The Intel suckage and the Microsoft suckage were complementary: they fed off, enhanced, and prolonged each other. The whole platform was like a hardware-only game console---no API toolkit available even after signing onerous NDA's and paying royalties, just bare hardware.

Comment no, they really didn't have 2.6 support. (Score 2, Informative) 160

This post is extremely dishonest. If you've actually installed enough to get that output, that necessarily means you already realize (1) you installed from some experimental .tar.gz file with all kinds of undocumented tampering, meant for development---not from the actual release .iso the way the 2.4 'lx' brand installs---so 'cat /etc/redhat-release' doesn't actually mean the installer ran up to that point, which is what it would imply to any reasonable individual. In fact, the GNU tar that extracted that .tar.gz was probably the Solaris one, not even Linux tar.

And (2) it's so broken that basic programs like 'rm' don't run! That page says b131 was the first build with enough basic syscalls for 'rm' to work, and the lx brand was moved to the attic in b143 (search for ``EOF lx brand'').

This field is full of overwhelming arcana, and without the good-faith effort of people like yourself we'll make bad decisions and garble our own history. Please don't spew out deliberately misleading teasers just for the contrary LULZ of it.

Comment Re:But ... (Score 4, Informative) 160

BrandZ never supported anything newer than CentOS 3.8, because it emulated the Linux 2.4 kernel. It was killed and put in the attic before the Oracle takeover. Also, the emulation was never good enough to run apache. I don't think it was ever used much except internally to run 'acroread', but Sun sure did flog it to death at every users-group marketing event. Half of the Solaris 10 promises they actually did fully and usefully deliver, albeit a couple of years late, but BrandZ wasn't one of them.

I would say Xen is a better way to run Linux than VirtualBox. There's a lot of work in OpenSolaris on polishing Xen, though unfortunately, (1) Xen isn't in OpenIndiana, and (2) you can't run VirtualBox and Xen at the same time. :)

There's stuff in Solaris that doesn't get nearly enough credit, though, like Crossbow 10gig NIC acceleration similar to RPS & RFS in Linux, Infiniband support and the NFS-RDMA transport, 'eventports' (an Nginx-friendly feature similar to epoll and kqueue), the integration between the ipkg package system and ZFS, and mdb (everyone talks about dtrace, but nobody about mdb). Then there's stuff that just shockingly sucks, like JDS and ipfilter and the permanent lack of a Chromium port.

Comment Internet == network of networks. (Score 1) 467

Get a home router capable of running OpenWRT VPN packages, such as a Fonera or a SheevaPlug, and then store files on your home server. The Fonera has pretty control panels produced by funded developers, so the software is pretty good, but its radio has a blob driver, and its memory and CPU make it seem like a ripoff compared to the SheevaPlug, which has more than 4x of both.

There are many different kinds of VPN: OpenVPN is probably best at busting through firewalls, while L2TP/IPsec has clients pre-integrated into proprietary operating systems.

You will also need to set up dynamic DNS on this router, and worry about the un-neutral port blocking or no-servers AUP your ISP might impose.
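
If you go the OpenVPN route, the smallest possible setup is a static-key point-to-point tunnel. This is a sketch, not a hardened config---'myhome.dyndns.example' stands in for whatever dynamic DNS name you register, and a real deployment should use TLS mode instead of a shared static key:

    # run once, then copy static.key to both ends over a secure channel
    openvpn --genkey --secret static.key
    # on the home router
    openvpn --dev tun0 --ifconfig 10.8.0.1 10.8.0.2 --secret static.key
    # on the laptop, from anywhere
    openvpn --remote myhome.dyndns.example --dev tun0 \
        --ifconfig 10.8.0.2 10.8.0.1 --secret static.key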

I use a plain IPsec VPN based on proprietary Cisco software, which is something you can also do with used gear from eBay, but this is definitely not the wise approach for someone with no budget or experience and a dynamic IP address.

Once the VPN is up you can get to your files almost the same way you do at home, only slower. ``Browsing'' won't work, but ``map network drive'' and Command-K will work just as they do at home, if you use an IP address. There is no monthly fee, and you keep all the files in your possession, where a dishonest or over-cooperative ``cloud'' company can't eagerly turn them over in response to secret police-state letters, curious advertisers, or civil lawsuits.

The internet should be connecting everyone together. It's not a service delivery platform for cloud providers, although you may think that if you read too many of the ads these companies post, and internalize too many of the un-neutral restrictions last-mile carriers place on your access.

Comment Re:Tinfoil hat mode (Score 1) 248

A proper telecom company would charge per MB, and 0MB = $0, and a MB would cost the same whether it's a Facebook megabyte, a Youtube megabyte, an ssh megabyte, a VPN megabyte to relakks/swissvpn/$yourcompany, an SMS megabyte, a web megabyte, a megabyte that involves turning on the GPS sensor you paid for inside your phone, or a megabyte to a tethered laptop. This is called ``neutrality'', and it leads us in exactly the opposite direction you imply---in the direction of controlling and auditing what our phones send and receive (user-imposed caps per app, no phoning home, no auto upgrades).

The upstream might rightly be priced differently than the downstream, because upstream is more expensive over radio in general, because the difference in cost between small packets and big ones is much greater in the upstream direction, and because voip/CIR small-packet upstream is cheaper than ssh upstream: the packets arrive on a fixed schedule that could take advantage of unsolicited grants, if the radio learned to support that (I doubt any do, yet).

But asking for ``unlimited'' plans (which are never actually unlimited, because of retarded clauses about ``excessive'' use and secret 5GB caps) leads directly to un-neutral connections that discriminate on price based on content, or on characteristics of the terminal like screen size and software freedom. People should stop asking for unlimited, man up, and control/audit their use.

Comment Re:ZFS? (Score 1) 300

Yeah, ZFS-FUSE is an old version of the Sun ZFS code, and only the latest versions are really safe and feature-complete (e.g. zpool import -F). Even if it weren't for the performance worry, do not fuck around with ZFS-FUSE; it is only for fun. Nexenta and the OpenSolaris CD's at genunix.org have almost the most recent ZFS. FreeBSD has, meh, a newer version than ZFS-FUSE, but not really new enough at last glance.
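
For reference, the recovery feature in question ('tank' is a placeholder pool name):

    # rewind a damaged pool to its last consistent transaction group;
    # only recent pool versions support this
    zpool import -F tank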

Comment Re:"does not yet support my older 2.4 Linux server (Score 3, Informative) 300

+1.

u r doing it rong.

If you need to keep such old software around, it needs to be running inside Xen/VirtualBox and/or become NFS-booted so that it's insulated from the hardware. That way, you're not forced to keep old hardware around to run your old software. If you insulate with Xen/VBox at the block level, you can use LVM2 on the host system to do snapshots, but you're still constrained by the legacy filesystem. If you NFS-boot, you can use a future snapshottable Linux filesystem to do snapshots, or you could buy a NetApp and use proprietary software to do the snapshots, or Solaris (if it sticks around), or any of a variety of things.

You can argue about which level of storage insulation is best in the long run. The filesystem level has certain advantages, at least with ZFS and probably with whatever future Linux snapshotting filesystem comes along, because of variable block size: small files get blocks in 512-byte increments, and any file larger than 128kB is stored in 128kB blocks. Since blocks are the unit of compression and deduplication, you want the largest block possible, but not too large or you suffer from the read-modify-write RAID5-like tax. A larger block compresses better and takes only one entry in the deduplication hash table. If you insulate at the block layer, then all blocks have to be the same, relatively small size, which makes compression and dedup work not nearly as well, because they can't give big blocks to big files. However, for old software you may want to change as little as possible; especially with sketchy Linux 2.4 NFS clients, maybe it's not safe to run certain apps over an NFS root, or maybe your ancient distribution doesn't support NFS booting well.
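
As a sketch of the filesystem-level option, assuming a ZFS host and made-up dataset names: give the old box an NFS-exported dataset instead of a fixed-block zvol, so the filesystem can hand out variable-sized blocks and snapshot per dataset:

    # variable blocks up to 128kB, compression, NFS export for the old box
    zfs create -o recordsize=128K -o compression=on -o sharenfs=on tank/legacy
    # dedup pays one hash-table entry per block, so big blocks win
    zfs set dedup=on tank/legacy
    zfs snapshot tank/legacy@before-upgrade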

One way or another, though, you need to find a way to keep the apps running while minimizing the blocks of ancient code on which you're still dependent, and this should be your overriding concern. You need to structure your plans to encapsulate the code subject to bit-rot, rather than flailing around on /. for some freshmeat app-of-the-day that claims to solve * with FUSE and some stupid Perl script. This is the difference between a serious professional douchebag who surveys the industry and can smell the difference between high and low quality, and a pathetic flailing do-my-homework-for-me everything-including-windows-has-strengths-and-weaknesses-right-tool-for-the-job doomed blinking medicated idiot douchebag. Do not be that douchebag.
