
Comment Re:Most pressing issues: (Score 1) 231

FOSS Projects: E-mail needs a replacement. Start building one. Encryption and anonymity as the core of the specs. Build branding, marketing, professional UX, and proper clients for all platforms. Yes, including Apple. Let's get going on this overdue problem.

We need a feasible distributed Facebook Killer. Diaspora is Meh, with shitty branding and UX and others are even worse.

These are the wrong kinds of projects for FSF to sponsor. There are armies of hipstercoders with well-trimmed beards and MacBooks in cafes attempting to tackle this problem to improve their Personal Brand. By this I mean, they are getting high attention in the contemporary world, and don't need attention-boosting by the long-sighted greybeards of FSF. They also have a low multiplier, in terms of code written and software freedom gained, since they're meant to be used by regular people and not programmers. They get more absolute use than the proposals below, but are not FSF work, just independently important work.

git is an example of a project FSF might have sponsored. People who are paid to write software and want to upstream their changes face a lower barrier to entry, thanks not just to the convenience but to the process that grows around git, so it has created a culture of upstreaming among corporate Linux users who used to fork and forget. Code review tools to replace the ad-hoc review happening on the sub-mailinglists where most Linux kernel code is traded could be helpful, especially if they fit the processes of the companies actually writing the code (sign-off of ownership, or whatever else they need), though those companies may already be happy. This would have been more interesting to work on in the 90s; it's mostly solved now.

Another might be, a lighter-weight less-finished version of Android for use in televisions, printers, "internet of shit" devices, that communicate through passive-matrix LCDs like refrigerators or overlays like televisions. If it were rigidly-open enough that any company using it would have to give the user control of the device, and compelling enough that many companies wanted to use it, then this could increase software freedom significantly. It could perhaps even contribute if the manufacturers insisted on putting in some proprietary blobs, but I would prefer something more revolutionary. Even when included by "mere aggregation," on an embedded device binary blobs are infectious because once you allow one in, it makes tracking top-of-tree really hard. The OS could, for example, have a sandbox for blobs, like NaCl in Chrome, to reduce this infectious effect.

Another might be, an open CPU and compiler for 0.5GHz single-core applications. There was some research paper about a legacy-free low-cost CPU that was performance competitive with ARM using half the gates.

Another might be, an open toolchain for NVidia, targeting CUDA applications, not graphics. I guess there is one in TensorFlow that Google just released? I don't totally understand it. But if NVidia could be convinced to use gcc to do their JIT, even if they continued releasing binary drivers, they might contribute back to a free software foundation-layer driver that's good enough to write scientific applications. This would give people a lot more computing freedom, because the applications for these GPUs are broadening.

Comment This helps the NSA. (Score 2) 153

The NSA isn't supposed to spy on Americans, but if the logs are in Ireland, and are in Ireland _because_ they relate to non-US users, then the NSA is definitely allowed to get them. They can also collect data in transit more freely if both ends are outside the US, or if one end is in Ireland. This looks like a move to give NSA more freedom to spy on European Twitter users by segregating the Americans. Also, if politics in the US goes well, NSA will have less freedom to spy on Americans. This move is bet-hedging: if US politics turn anti-authoritarian, NSA won't lose as much access to Europeans because they'll be better segregated.

To judge this move correctly, you need to list all the forms of government surveillance: what organization is requesting data, why does that organization request it, is it possible to refuse the request. This is all secret, though. It's not even possible to disclose the request. The transparency reports Twitter and Google release aren't detailed enough because the government won't allow them to be, and has structured what they're allowed to release to limit debate on the methods and intentions of the government. The more interesting information requests, like the one Calyx received, have more of the now-standard threat-backed secrecy requirements around them than less interesting requests, so the outliers that should be driving debate are carefully hidden. There's no way for the public to judge the usefulness of what Twitter did. Twitter themselves has a better idea, but still not a very good idea.

I think the Europeans are less rational about this than the Americans.

  - they think there's no first-world population-control surveillance in Europe just because their spy agencies haven't had a leaker yet. NSA leaks should tell them how stupid an assumption this is, and they should be embarrassed it took the idealism of an American to expose their own authoritarianism indirectly. Instead, they are like, "oh Americans are so authoritarian. Thank God I'm European." Pretty smug, guys.

  - they don't make a connection between surveillance and power. For example, NSA spies on Europeans, finds the leaders of a globalization protest movement, shares the information with GCHQ, and the leaders are detained at immigration in London until the protest is over. This is a low-hanging-fruit anti-democratic way that surveillance has been used in the past, and is a task at which bulk surveillance is good because it can reveal the structure of networks (ex. the Paul Revere metadata attack). But it's the connection between the surveillance and the detention that matters. Instead they're worried abstractly whether they're "watched" or not. Why would an American be worried if the Stasi had a file on them? It's a problem, though, if Stasi shares their files with FBI, which in this case, they do.

  - their fears aren't proportionate. For example, some European sysadmins I spoke to fear the FISA court will approve a warrant to collect industrial espionage data through PRISM. Is this possible? Yes: the court is a rubber stamp, and if it weren't a rubber stamp it's also within spy agency skill to ask questions and disguise their goal, ex. "we think this top engineer at Xerox is into child porn so please give us complete copy of his work email." Is the fear proportionate, though? No: the US is generally less corrupt than Europe when doing international business, the French in particular are notorious for industrial espionage, and there is a poor match between PRISM and industrial espionage, so the US would probably use a different program and method, like exploiting employees' phones and laptops, or bribing employees in traditional GRU style. For the former attack, the European response (self-host everything rather than using Google/MS/etc.) makes them more vulnerable to industrial espionage, not less. However, constructing this fear provides a pretense for retaliatory protectionism, and they want to Do Something because they are Outraged. But, European-style outrage means beat up on the US rather than engage in their own broken democracies.

Europe needs to step up the game. Even this news about Twitter is something an American company has done. What has Europe done? Nothing but drink wine and eat cheese and take two months off every summer while their politics turn authoritarian and their spy agencies run unchecked. Try not to pass any more mandatory Internet censorship laws on your way to your summer houses, Europeans.

Comment Re:Lies, bullshit, and more lies ... (Score 1) 442

Instead they write a job description which is impossible, or geared to bringing in a specific foreign worker.

They do that, but that's the intra-company transfer visa, which is a different category than the H1B.

An important thing to remember is that these agreements are somewhat reciprocal: other countries will adjust their visa programs based on what we offer their citizens. I'm in favour of easier migration if it also applies to me wrt places I'd like to move. I could've studied art, but chose to study something difficult that I knew I was good at, because I expected to get rewards like the freedom for my family to move from country to country. I often have regrets about this, and if it weren't for the rewards, and for the possibility of sharing those rewards, I wouldn't have done it. So, on one hand, yes, when someone wants to alter the deal I feel threatened, and on the other hand, easy migration is part of the deal I was promised so I'm not sure which side is trying to alter it.

However, yes, the billionaires are asking for this, I think because they have some visceral belief that we're overcompensated, which would be a shocking hypocrisy considering their own ratio of compensation to merit: they're compensated based on their position at the nexus of capitalist power, which comes from a mixture of talent and luck, not from a bargain they made with their future. If too few of us are educated and ready to work in an advanced society where people are free to choose to study art or raise children instead of working, I think they need to pay up until the rewards for studying engineering over art match the relative needs of the society. I don't think they should pad our ranks with an influx of the desperate to shift the ground underneath us. That's not a healthy kind of migration.

And ultimately we're the ones who determine what's healthy, not them. Workers are the "stakeholders," not the neo robber barons. I don't know how that fact has become so clouded that they barely need to make a pretense of framing arguments in our service, much less obtaining our endorsement.

Comment Re:Why not merge with Android, already? (Score 1) 112

Why not merge with Android, already?

Android is unable to do any of the things that make ChromeOS worth buying, such as:
  - update all the devices together, with the same unbloated version, direct from Google, signed by Google (not the manufacturer), and allow developer access to run any code you want that can't be turned off by the manufacturer
  - promise updates for at least five years after end-of-sale
  - update in a painless manner, free from interrupting dialogs where the user equivocates over the update, consents to it thus accepting coulda-shoulda responsibility for any regressions in it, and then waits a long time while the device "updates". Update without rebooting multiple times and without taking a long time to reboot so that updates can be pushed ~weekly without upsetting people.
  - provide serious security (TPM-based disk crypto, TPM-based prevention of rollback attacks, seccomp-bpf, Google-signed code all the way to the read-only bootloader and a fuse to lock manufacturers out of the machine after testing is complete, cell radios that don't have access to the main CPU's RAM, a completely different style of sandbox than Android, and the possibility of using U2F security keys)
  - serve multiple users either with total isolation (the login screen), or without isolation on the same screen (multiprofile).
  - ship reliable devices based on solid reference designs that don't reboot randomly or experience weird slowdowns, battery drains, and radio lockups. Because of differentiation and openness, Android hardware is under more pressure and tends to be chaotic and uneven in quality, even on Nexus devices, compared to bog-simple cheapo intel laptops.
  - the ChromeOS user interface can assume you have a keyboard and an accurate pointing device, so it doesn't need to have excessive whitespace around touch targets, awkward text input state machines with glitchy on-screen keyboards and weird charade games.

ChromeOS is unable to participate in the overhyped phone ecosystem where developers want to spy on the user with evercookies, manufacturers want to push differentiating bloatware on the user, carriers want to "approve" updates and use backdoor methods to lock handsets. ChromeOS (modulo this article) doesn't participate in the narcissistic-jewelry UI churn that requires a completely different skin and set of ringtones before every Christmas, so people who find that disruptive don't have to put up with it. Android provides a place where all that can happen so that Google doesn't get locked off of phones by an Apple monopoly and can pander to users who want the things that can only be delivered by paying these prices.

There are similarities and differences that aren't essential, like ChromeOS's "web store" which has Android-style coercive permissions, or the way ChromeOS does development at an open-source HEAD while Android throws big releases over the wall, ChromeOS's efficiency at updating (it sends really well-compressed diffs), ChromeOS's efficiency at running (it can open more tabs than Android in the same amount of RAM), the uncrippled version of Chrome that comes on ChromeOS vs what's on Android (the Android one isn't open source and is missing features), the DRM discrimination where videos refuse to play on "devices", the ChromeOS branding requirement that you support 5GHz wifi and an SD card slot, etc. It would be good if they "merged" those things, taking the best from each world. But as for the former points, it's hard to see how they can be reconciled.

I think it's a good idea to do large engineering projects more than once, in general, to help avoid getting trapped in local minima. Engineers tend to double down on quasi-religious assumptions and become very stubborn about them, so that competition is the only way to shake them loose. I think your dismissiveness is evidence of this, and I think a lot of this recurring call to "merge" ChromeOS is rooted in one project feeling threatened by the other. Unfortunately it's not symmetrical. Android fanbois feel threatened, so they say, "merge them into us," or "make the browser the browser and the OS the OS." ChromeOS fanbois feel threatened, so they say, "leave us alone."

My Chromebooks are pretty poor performers and as the months move on they get slowly worse.

My $200 chromebooks perform poorly, and my $400 chromebooks perform well. However, both perform better than a $500 phone, and the performance over time is totally consistent.

If it were not consistent, there's a "powerwash" button that's more useful than Android factory reset because most of your state will be resynced from cloud after the device is cleared. However, in my experience so far, performance is consistent, and there's no need for a wipe button to improve performance. Maybe you saw something different, but:
  - I think you are biased, and I can't replicate your result.
  - ChromeOS performance is better than Android phone performance, if you hold price and device age equal between them

Comment Re: Strong public relations (Score 1) 200

Let me know when Ubuntu installer supports plausibly-deniable disk encryption, or ChromeOS supports plausibly-nonexistent hidden profiles. These are the only two kinds of disk encryption I can easily use, and neither supports plausible deniability.

It also doesn't solve the problem of, "We've identified these GMail and Facebook accounts as yours. Please login to them or go to jail." I don't think we have "cloud" plausible deniability, and for the case of social networks it doesn't seem feasible.

You're giving hypothetical solutions to toy problems, not finished solutions to practical problems.

Comment Re:The right to read. (Score 1) 72

Authors have right to be paid for their work.

Nobody has a "right" to be paid for their work. If I doodle in a sketchbook, or shit in a bucket on stage, there is no fundamental right to be paid for that just because it feels like work to me, or looks like work to a bystander. In the same way, those guys that wander into traffic and wash your windshield without asking you don't deserve to be paid for making driving more stressful.

This is relevant to the discussion because the "right to be paid for work" soundbite is used in enthymemes like this:

1. $random_thing happened.
2. We could (a) pay the artist every time it happens, or (b) not pay the artist, some or all of the times it happens.
3. The right answer is (a) because artists and authors have a right to be paid for their work, and the situation wouldn't have arisen if they weren't working.

I agree we should pay artists and authors to convince them to do more work. Beyond that, I also agree artists should earn enough to live with dignity, matching our respect for the class of work they do, and this isn't happening uniformly enough right now. Neither is the same as "right to be paid for work" because neither implies that they should be paid every time a thing happens, nor even that they need to be paid specifically _for work_ at all (not that it's necessarily a bad idea to pay them "for work", just that it's not implied by what I agree with, and is not a "right").

This is a ridiculous argument, and we should stop making it. You're making it even sillier by adding,

  4. But we could also (a) make an exception, or (b) not make an exception.
  5. Right answer is (a) because "mah propertah," or something.

Instead we need to go back to 1 - 3 and make them complicated enough to capture what's really going on.

Comment Re:Almost makes me want to live there (Score 1) 77

Don't let today's positive news cycle make you forget that the "EU government" are also the ones who passed the mandatory data retention law, which is worse than anything going on above the table in the US.

This shouldn't be reduced to an ad-hominem comparison of countries, but I admit I also thought, "Now it's harder for the US to pass a mandatory data retention law."

Comment oversimplified PR noise ignores decade of research (Score 4, Interesting) 105

The bufferbloat "movement" infuriates me because it's light on science and heavy on publicity. It reminds me of my dad's story about his buddy who tried to make his car go faster by cutting a hole in the firewall underneath the gas pedal so he could push it down further.

There's lots of research on this dating back to the 90's, starting with CBQ and RED. The existing research is underdeployed, and merely shortening the buffers is definitely the wrong move. We should use an adaptive algorithm like BLUE or DBL, which are descendants of RED. These don't have constants that need tuning like queue length (FIFO/bufferbloat) or drop probability (RED), and they're meant to handle TCP and non-TCP (RTP/UDP) flows differently. Linux does support these in 'tc', but (1) we need to do it by default, not after painful amounts of undocumented configuration, and (2) to do them at >1Gbit/s we ideally need NIC support. From what I hear, Cisco supports DBL in cat45k sup4 and newer, but I'm not positive, and they leave it off by default.
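To make the "constants that need tuning" complaint concrete, here's a toy sketch of the RED idea (my own illustration, not any real implementation; the thresholds, weight, and max probability are made-up values): drop probability ramps linearly between two hand-picked queue-length thresholds, and picking those thresholds well requires knowing your traffic in advance.

```python
# Toy sketch of a RED (Random Early Detection) queue.
# min_th, max_th, max_p, and the EWMA weight are exactly the
# hand-tuned constants being complained about; values are arbitrary.
import random

class RedQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.weight = weight
        self.avg = 0.0          # EWMA of instantaneous queue length
        self.queue = []

    def drop_probability(self):
        if self.avg < self.min_th:
            return 0.0          # queue short: never drop early
        if self.avg >= self.max_th:
            return 1.0          # queue long: drop everything
        # linear ramp between the two thresholds
        frac = (self.avg - self.min_th) / (self.max_th - self.min_th)
        return frac * self.max_p

    def enqueue(self, pkt):
        # update the moving average, then maybe drop early to
        # signal the sender before the buffer is actually full
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if random.random() < self.drop_probability():
            return False
        self.queue.append(pkt)
        return True
```

Everything hinges on min_th/max_th matching the number of flows, which is the tuning problem BLUE-style algorithms are meant to remove.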

For file sharing, HFSC is probably more appropriate. It's the descendant of CBQ, and is supported in 'tc'. But to do any queueing on cable Internet, Linux needs to be running, with 'tc', *on the cable modem*. With DSL you can somewhat fake it because you know what speed the uplink is, so you can simulate the ATM bottleneck inside the kernel and then emit prescheduled packets to the DSL modem over Ethernet. The result is that no buffer accumulates in the DSL modem, and packets get laid out onto the ATM wire with tiny gaps between them---this is what I do, and it basically works. With cable you don't know the conditions of the wire, so this trick is impossible. Also, end users can only effectively schedule their upstream bandwidth, so ISP's need to somehow give you control of the downstream---*configurable* control through reflected upstream TOS/DSCP bits or something, to mark your filesharing traffic differently, since obviously we can't trust them to do it.
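The DSL trick is easy to demonstrate with a toy model (this is a simulation, not real 'tc' configuration; the rates and packet size are invented numbers): if the host releases packets slightly slower than the modem can drain them, the modem's own buffer never accumulates more than about one packet.

```python
# Toy model of shaping egress slightly below the uplink rate so the
# DSL modem's buffer stays empty. All numbers are made up.
LINK_RATE = 100_000      # modem uplink drain rate, bytes/sec
SHAPE_RATE = 90_000      # host schedules a bit under the link rate
PKT = 1500               # bytes per packet

def simulate(seconds=10):
    modem_queue = 0.0                   # bytes buffered inside the modem
    max_backlog = 0.0
    send_interval = PKT / SHAPE_RATE    # host emits one packet per interval
    t, next_send = 0.0, 0.0
    dt = 0.001
    while t < seconds:
        if t >= next_send:
            modem_queue += PKT          # host hands the modem one packet
            next_send += send_interval
        # modem drains continuously at the wire rate
        modem_queue = max(0.0, modem_queue - LINK_RATE * dt)
        max_backlog = max(max_backlog, modem_queue)
        t += dt
    return max_backlog
```

Because SHAPE_RATE < LINK_RATE, each packet fully drains before the next arrives, so the backlog (and thus the modem-side latency) stays bounded at roughly one packet instead of growing into a bloated buffer.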

Bufferbloat infuriates me because it's blitheringly ignorant of implemented research more than a decade old, and it's allowing people to feel like they're doing something about the problem when really they're just swapping one bad constant for another. It's the wrong prescription. The fact that he's gotten this far shows our peer review process is broken.

Comment Re:Roku is linux (Score 1) 481

A Hollywood movie on a DVD-R (no DeCSS) would be compatible with software freedom so far as I can see, because I don't think there's any difficulty playing it with vlc.

But, for example, it's difficult to sell a DVD player that lets you skip the FBI warnings and the ads at the beginning of kids' DVD's, or to turn off macrovision on an NTSC output, or to play DVD's from all regions, or take .jpg screenshots of a playing DVD, because a network of laws (DMCA), industry associations granting licenses (DVDCCA), and key material with ``renewability'' regimes and silly copyright protection (the CSS player keys) forces player manufacturers to obey the instructions on the disk rather than the player's owner's commands. If you had software freedom, like you do when you assemble a free software player based on vlc + libdecss including all player keys + a dangersbrothers rpc1 ROM for your DVD drive, then you would immediately remove all these restrictions, if they still existed, by recompiling vlc. Thus the DMCA+DVDCCA+CSS network has to prevent you from playing DVD's with vlc period if they want to keep their content inside the DRM wrapper. Note that I'm not saying it's currently impossible to buy a region-free DVD player---obviously it IS possible, but it has in the past been difficult. It's subject to legal circumstance. However, if a DVD player with software freedom existed, it would be region-free and not have any of these other restrictions on it either, because the first geek who touched it would remove them all and then share his work: THAT part is fundamental and not subject to circumstance, that vlc never had any of these restrictions whenever it's been able to play a DVD.

It's DRM that's incompatible with software freedom, not the movie itself. I'm not confused: movies without DRM do exist, and need not be $0 free nor resamplable free to be without-DRM.

What's more, DRM's not incompatible with Linux. DRM can be done on Linux because Linux is locked at GPLv2. If Linux could be upgraded to GPLv3 (it can't, because Linus deleted the ``or any later version'' clause and didn't centralize copyright ownership), then DRM on Linux would become a lot more difficult, because the Tivoizing and Appstore-chain-of-signatures tricks wouldn't work, but I bet it would still be easy in practice to do sufficiently frustrating DRM under a GPLv3 kernel.

Comment Re:Roku is linux (Score 5, Insightful) 481

This is a great point: Linux isn't incompatible with DRM, but open source is. If you gave people a DRM player for which they truly had in-practice software freedom, the first thing they'd do is remove all the DRM.

The post confuses Linux and open source, but Netflix is still fundamentally an anti-software-freedom company because their entire business is built on DRM which will always be incompatible with software freedom.

Actually writing a Linux client has nothing to do with any of this. The streaming part of Netflix's business makes them into subcontractors of the Hollywood studios: they deliver Hollywood content to eyeballs with iron-clad digital restrictions management in exchange for a cut of the fees flowing back to the studios. DRM is their entire business. They will always be primarily harmful to any real movement for software freedom.

Linux actually makes a great DRM platform: TiVo invented a whole term for it, ``tivoization'', where you have all the source code and ability to recompile the kernel, but then you can't run it anywhere because the hardware only runs signed kernels.

Likewise, I think the Android app store is extending this all the way down to the userland, right? where for example Skype will only run on phones with ``untampered'' google-signed kernels and hardware? I might be wrong---hard to keep up.

Anyway, why wasn't the DRM vs. software freedom point in the first post? I thought every Linux user knew this. Do people really think Linux == $0, and that's that?

Comment Re:This is well known to a small community (Score 5, Insightful) 123

Yes, that's my understanding as well---the point of slow start is to go easy on the output queues of whichever routers experience congestion. So if congestion happens only on the last mile, a hypothetical bad slow-start tradeoff does indeed only affect that one household (not necessarily only that one user); but if it happens deeper within the Internet, it's everyone's problem, contrary to what some other posters on this thread have been saying.

WFQ is nice, but WFQ currently seems to be too complicated to implement in an ASIC, so Cisco only does it by default on some <2Mbit/s interfaces. Another WFQ question is: on what inputs do you do the queue hash? For default Cisco it's on TCP flow, which helps for this discussion, but I will bet you (albeit a totally uninformed bet) that a CMTS will do WFQ per household, putting all the flows of one household into the same bucket, since their goal is to share the channel among customers, not to improve the user experience of individual households---they expect people inside the house to yell at each other to use the internet ``more gently'', which is pathetic. In this way, WFQ won't protect a household's skype sessions from being blasted by MS fast-start the way Cisco default WFQ would.
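The two hashing choices are easy to contrast in a few lines (a toy illustration with invented addresses and a hypothetical queue count; crc32 is used only as a stand-in deterministic hash, not what any real box does):

```python
# Per-flow vs per-household queue hashing for WFQ.
# Two flows from the same house: per-flow hashing *can* put them in
# different queues; per-household hashing never can, by construction.
import zlib

N_QUEUES = 64   # hypothetical number of WFQ buckets

def flow_bucket(src, sport, dst, dport):
    # hash the full 5-tuple-ish key: each TCP flow gets its own chance
    # at a separate queue
    key = f"{src}:{sport}-{dst}:{dport}".encode()
    return zlib.crc32(key) % N_QUEUES

def household_bucket(src, *_):
    # hash only the customer's address: every flow from the house
    # collapses into one bucket
    return zlib.crc32(src.encode()) % N_QUEUES

skype = ("10.0.0.2", 40000, "198.51.100.7", 443)
bulk  = ("10.0.0.2", 40001, "203.0.113.9", 80)

# Same house, so household hashing necessarily lumps them together,
# and the bulk transfer's backlog sits in front of the skype packets:
assert household_bucket(*skype) == household_bucket(*bulk)
```

So per-household WFQ achieves the ISP's fairness-between-customers goal while doing nothing for fairness between a customer's own flows.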

If anything, cable plants may actually make TCP-algorithm-related congestion worse, because I heard a rumor they try to conserve space on their upstream channel by batching TCP ACK's, which introduces jitter, meaning the window size needs to be larger, and makes TCP's downstream more ``microbursty'' than it needs to be. If they are going to batch upstream on purpose, maybe they should timestamp upstream packets in the customer device and delay them in the CMTS to simulate a fixed-delay link---they could do this TS+delay per-flow rather than per-customer if they do not want to batch all kinds of packets (ex. maybe let DNS ones through instantly).

RED is not too complicated to implement in ASIC, but (a) I think many routers, including DSLAM's, actually seem to be running *FIFO* which is much worse than RED even, because it can cause synchronization when there are many TCP flows---all the flows start and stop at once. (b) RED is not that good because it has parameters that need to be tuned according to approximately how many TCP flows there are. I think BLUE is much better in this respect, and is also simple enough to implement in ASIC, but AFAIK nobody has.
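BLUE's advantage over RED in that respect is that its single state variable adapts from events instead of from tuned queue-length thresholds. A rough sketch of the core update rule (the increment/decrement constants here are invented; the real paper derives its own):

```python
# Sketch of the BLUE AQM update rule: one marking probability p,
# nudged up on buffer overflow and down when the link goes idle.
# No queue-length thresholds to tune, unlike RED.

class Blue:
    D1 = 0.02    # increment on overflow (made-up constant)
    D2 = 0.002   # decrement on idle (made-up constant)

    def __init__(self):
        self.p = 0.0   # current mark/drop probability

    def on_queue_overflow(self):
        # persistent congestion despite current p: mark more aggressively
        self.p = min(1.0, self.p + self.D1)

    def on_link_idle(self):
        # link underused: we're marking too much, back off
        self.p = max(0.0, self.p - self.D2)
```

Because p converges wherever overflow and idle events balance, the same code works whether there are 2 flows or 2000, which is exactly what RED's fixed thresholds can't do.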

I think much of the conservatism on TCP implementers' part can be blamed on router vendors failing to step up and implement decades-old research on practical ASIC-implementable queueing algorithms. I've the impression that even the latest edge stuff focuses on having deep, stupid (FIFO) queues (Arista?) or minimizing jitter (Nexus?). Cisco has actually taken RED *off* the menu for post-6500 platforms: 3550 had it on the uplink ports, but 3560 has ``weighted tail drop'' which AFAICT is just fancy FIFO. I'd love to be proved wrong by someone who knows more, but I think they are actually moving backwards rather than stepping up and implementing BLUE.

And I like very much your point that caching window sizes per /32 is the right way to solve this, rather than haggling about the appropriate default, especially in the modern world of megasites and load balancers where a clever site could appear to share this cached knowledge quite widely. But IMSHO routing equipment vendors need to be stepping up to the TCP game, too.

Comment Re:DESQview (Score 1) 347

DESQview was more like VMware than it was like present-day Windows NT/XP/... It had some hypervisor memory-management syscalls that were standardized and could be called by programs running within it. The standard was called LIM EMS or something, the Lotus/Intel/Microsoft Expanded Memory Specification, and it was implemented by QEMM386.SYS using the 80386 vm instructions, but it was also implemented by hardware ISA memory expansion cards with MMU's on them---I don't think many people bought these, but you could use >1MB on a 286, and it was compatible with the DESQview API. This API was later absorbed by Microsoft (DEVICE=EMM386.EXE). People forget how important this API was: for many years it was common to have a computer with 2 - 16MB of RAM but no reasonable way to get programs to actually use that RAM. One way was to use programs like DESQview to run several API-less programs at once, sort of like how we can now virtualize many 4GB 32-bit guests on a 128GB 64-bit host. Another way to get at the extra RAM was through these ad-hoc-MMU bank-swapping PAE-like API's. The most common way was to burn it on useless things like TSR disk caches and pop-up PIM's.
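The bank-switching idea behind EMS is simple enough to sketch (a toy model of an EMS-style page frame in Python, not the real INT 67h interface; page count and sizes are illustrative): a small fixed window in the addressable range is remapped onto pages of a much larger pool.

```python
# Toy model of EMS-style expanded memory: a fixed "page frame" window
# into a pool far larger than the CPU could address directly, switched
# by remapping rather than by copying.

PAGE_SIZE = 16 * 1024      # EMS logical pages were 16KB

class ExpandedMemory:
    def __init__(self, n_pages=256):       # 4MB pool, well past the 1MB limit
        self.pool = [bytearray(PAGE_SIZE) for _ in range(n_pages)]
        self.mapped = 0                    # which logical page the window shows

    def map_page(self, page):
        # in real EMS, one driver call switched which physical bank
        # appeared at the fixed page-frame address
        self.mapped = page

    def window(self):
        # the program only ever reads/writes through this small window
        return self.pool[self.mapped]

em = ExpandedMemory()
em.map_page(42)
em.window()[0:5] = b"hello"   # write through the window
em.map_page(0)                # window now shows a different page
em.map_page(42)               # remap: the data is still in the pool
```

The analogy to modern PAE is the same trick one level up: more physical memory than the address space can see at once, exposed a window at a time.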

The forgotten part of the story here is how much *intel* sucked. The intel suckage and the microsoft suckage were complementary, and fed off and enhanced and prolonged each other. The whole platform was like a hardware-only game console: no API toolkit available even after signing onerous NDA's and paying royalties, just bare hardware.

Comment no, they really didn't have 2.6 support. (Score 2, Informative) 160

This post is extremely dishonest. If you've actually installed enough to get that output, that necessarily means you already realize (1) you installed from some experimental .tar.gz file with all kinds of undocumented tampering, meant for development, not from the actual release .iso the way the 2.4 'lx' brand installs---so 'cat /etc/redhat-release' doesn't actually mean the installer ran up to that point, which is what it would imply to any reasonable individual. In fact, the GNU tar that extracted that .tar.gz was probably the Solaris one, not even Linux tar.

And (2) it's so broken that basic programs like 'rm' don't run! That page says b131 was the first one with enough basic syscalls for 'rm' to work, and the lx brand was moved to the attic in b143 (search for EOF lx brand).

This field is full of overwhelming arcana, and without the good-faith effort of people like yourself we'll make bad decisions and garble our own history. Please don't spew out deliberately misleading teasers just for the contrary LULZ of it.

Comment Re:But ... (Score 4, Informative) 160

BrandZ never supported newer than CentOS 3.8 because it emulated Linux 2.4 kernel. It was killed and put in the attic before the Oracle takeover. Also the emulation was never good enough to run apache. I don't think it was ever used very much except internally to run 'acroread', but Sun sure did flog it to death at every users group marketing event. Half of the Solaris 10 Promises they actually did fully, usefully deliver, albeit a couple years late, but BrandZ wasn't one of them.

I would say Xen is a better way to run Linux than VirtualBox. There's a lot of work in OpenSolaris on polishing Xen, though unfortunately, (1) Xen isn't in OpenIndiana, and (2) you can't run VirtualBox and Xen at the same time. :)

There's stuff in Solaris that doesn't get nearly enough credit though, like Crossbow 10gig NIC acceleration similar to RPS & RFS in Linux, Infiniband support and NFS-RDMA transport, 'eventports' (an Nginx-friendly feature similar to epoll and kqueue), and the integration between the ipkg package system and ZFS, and mdb (everyone talks about dtrace, but no one about mdb). Then there's stuff that just shockingly sucks, like JDS and ipfilter and the permanent lack of a Chromium port.
