Comment Contribution Bounties and other thoughts (Score 1) 85

One thing you could do is write up a custom license and have contribution bounties.

The license would go something like this:

  - Only "Your Company" can sell either the code (any part of it), any derived works based on the code, or the binaries.
  - Any party who pays for a license for the "Ultimate Plan" (or whatever you want to call it) gets a copy of the source code.
  - Any party that does not have an "Ultimate Plan" does NOT get the source code, and MAY NOT distribute any binaries they get, whether purchased from you or given to them by others.
  - Anyone who has a legit copy of the source code may distribute binaries (modified or original) to any third party they wish, royalty-free. That third party can only *use* the binaries; they may not redistribute them, and may not modify or view the code, unless they also have an "Ultimate Plan" with Your Company.
  - Anyone who distributes modified binaries containing changes they themselves made using the source code MUST provide those source changes back to you (for free).
  - Anyone who does NOT distribute modified binaries to others MAY do one of two things: either keep all their code/binary changes a "secret" (not share them with anyone), OR submit them to Your Company for a Contribution Bounty. Your Company will review the source code (without storing a copy of it or taking any ideas from it), place a value on it, and make a "take it or leave it" monetary offer for the contribution. If the offer is accepted, Your Company receives a copyright assignment for the code, so you can do whatever you want with it. If it is refused, Your Company must destroy all copies of the code/binaries it received.

So we'd have situations like the following:

Scenario A: The University of Vulcan buys your Ultimate Plan and has the source code. Their researcher comes up with an innovative new algorithm that will make your software better. They use it and your product in a paper, and distribute their compiled binaries so other researchers can test it. As soon as they go to distribute modified binaries to third parties, they realize that they are now required to give you back the source code they changed. They comply with the license and do so. You win because you get code to enhance your product for free. The university wins because they can give away their modified copy of your product to other researchers as a way of demonstrating their work.

Scenario B: The University of Tassadar buys your Ultimate Plan and has the source code. Their researcher finds a neat security vulnerability in your product. They don't have any incentive to share it with anyone else, but they want to bring in some revenue from it, so they sell you the vulnerability details and a patch for $5,000. The university wins because it got some money (and the researcher can probably write a paper on it). You win because your product has one less security vulnerability.

Scenario C: An independent self-taught researcher wants to use your product. They are not a very advanced user, so they have no special need to modify the code. For a reasonable price that a person making a living wage (or slightly less) can afford, they go to your site and purchase a "Basic Plan" that gives them access to the binaries and some type of support. The researcher wins because they got a good product at an affordable price. You win because you made money.

Scenario D: An independent researcher has a friend who's also a researcher, at a large institution with an Ultimate Plan. The independent guy wants to use your product but is very poor. Since he knows the guy at the institution, he gets him to provide a binary, free of charge, so he can work on stuff that needs a product like this. The independent researcher benefits by getting access to your product without breaking his budget. The institution benefits by increasing its networking with the larger scientific community. You benefit because you already sold the institution an Ultimate Plan (probably very expensive) that grants the license to share binaries in the first place.

A scheme like this is:

(1) Complicated,
(2) Hard to explain in just a few brief sentences,
(3) Likely to be deliberately skirted by researchers just looking to "get a job done",
(4) Also pretty likely to be accidentally violated by people who don't fully understand your license terms,
(5) Probably not going to net you any more money than if you made a much simpler and more straightforward marketing plan (either proprietary or open source).

This is my second attempt at answering this /. post, and in both posts, I've found through reflection that the ideas I've presented for a scheme to "make money while making it open source or kinda open source" are not very *good*, for a variety of reasons. It doesn't seem to be a tenable position. It makes things cumbersome, difficult to understand, and full of "gotchas" that can really piss people off.

In conclusion, I'd say that you should not attempt any half-measures. Either go straight for a true Free and Open Source license (preferably copyleft, but hey, the Apache 2.0 license isn't bad), or lock it up and sell licenses. Don't do anything in between.

Comment Reciprocal Public License (Score 1) 85

As much as I find it distasteful to recommend a license that is Open Source but NOT Free Software, you may want to check out the Reciprocal Public License: http://opensource.org/licenses...

It basically says that anyone who makes any changes to the software, whether they're "just internal" or distributed to a third party, must share those changes back to the community.

It would allow your company to release your software under Copyleft terms similar to the GPL, but with an added *restriction* that prevents companies (and individuals) from modifying your code to suit their purposes and then withholding those changes from you.

Of course, people are free to violate your license and never tell you; enforcing the license in the case of non-distribution would be difficult to impossible without trespassing on their property to obtain evidence (and then you have fruit of the poisonous tree).

The RPL tries to write into the license what many (most?) developers do anyway: if you're going to make any enhancements, give them back to the "mothership" so they can include them in the main product, thus making the main product better.

You'd thus have the following situation:

(1) Individuals are so unlikely to be caught that, unless they are very careful and deliberately ethical, they will probably not respect the wishes of the RPL.

(2) Companies are less likely to knowingly violate the license because of the risk of damage to their reputation in court, but you may find employees within a company who choose to "risk it" and not inform their management about the obligations attached to RPL'ed code. So you'd have a certain group of companies that WOULD contribute back (to comply with the letter of the license), and a certain group that WOULDN'T (and risk the consequences if it's ever discovered).

Funnily enough, you wind up with a very similar situation with the GNU General Public License or Affero General Public License 3.0, except that:

(1) Not distributing their code is *legal*, unless they distribute binary copies to someone who is considered not to be a member of the organization. So an employee can legally hand his coworker a binary-only copy of your product, but if he gives it to his neighbor or posts it on a mailing list, he is obligated to include the source code, or at least a written offer to provide the source code upon request (possibly with a fee for the distribution medium, though the GPL caps that fee at the reasonable cost of physically performing the distribution).

(2) Some third party who (legally) gets a copy of the binaries from them could go through the process of obtaining the source code, as they'd be entitled to, and would then be free to hand you a copy for free (or not, as they please). Basically, anyone willing to pay the distribution fee could "liberate" the code by posting it on GitHub, and that would be 100% legal.

(3) You'd *still* have companies and individuals in both the "not sharing with you" and "sharing with you" camps, except that those who choose *not* to share with you are not necessarily violating the license of the code (unless they give you binaries only, and then cackle maniacally and refuse to hand over the source).

Enforcement concerns make the *expected outcome* of licensing under either the GPL3/AGPL3 or the RPL 1.5 almost identical, except that under the RPL you would technically be entitled to sue anyone you discover has modified the original source without sharing those modifications with the original developers (you). But the likelihood of actually finding someone foolish enough to violate your license and then willingly reveal that fact to you is pretty low.

Comment Not only smartphones, and not only personal devices (Score 2) 227

Three parts to my post here. Part 1: WHAT do people (often) do that's against security policy. Part 2: WHY do people (or at least, me, and people I know) do it. Part 3: Soapbox ("wot I think"), aka why I think this type of policy is silly and what I'd do differently.

Part 1: The "what"

  - (Obvious, since it's in TFS) Using your smartphone/tablet while at your desk, assuming that's disallowed by policy.
  - Bypassing the firewall/proxy at work by routing through a remote server or VPN, using e.g. stunnel, OpenVPN, or whatever else can be hacked up (worst case: build a website that accepts a remote page's URL and tunnels all of its resources through); see the sketch just after this list.
  - Installing/running software that isn't explicitly approved by IT management, whether it shows up in Add/Remove Programs or not. Example: portable apps, VBScript files, Java class files or JARs, .NET IL, etc. often fly "under the radar" of programs that try to detect and prevent the installation of unauthorized software.
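
As a concrete flavor of that second bullet, the lowest-effort variant is usually an SSH-based SOCKS proxy (the hostname below is a placeholder for any machine you control outside the firewall):

    # Open a local SOCKS5 proxy on port 1080, tunneled through SSH.
    ssh -N -D 1080 user@home-server.example
    # Point the browser's SOCKS5 proxy at localhost:1080, and traffic now
    # rides the encrypted SSH session straight past the corporate filter.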

Part 2: The "why" (from the perspective of employees)

  - People who want to "get work done", but need to access information out there on the intarwebz that happens to be blocked by an arbitrary and capricious firewall program, will acquire code, programs, or even just plain *knowledge* from remote third-parties, will do so using either proxy-bypassing, tunneling, or third-party Internet connections (like the 3G/4G data connection on their phone).

Often, people perceive the monolithic "IT" organization as opaque, impenetrable, and overly bureaucratic, taking way too much time, money, and resources to acquire the software, permit the actions, and whitelist the knowledge sites people need to get work done. They may also believe (rightly or wrongly) that the IT organization would prohibit the action they're trying to take, while feeling that their decision is actually in the company's best interests.

They may (or may not) go through their own vetting process of the knowledge/software they are acquiring in order to determine if it is malicious or not, and once satisfied, they may implement it under the nose of IT. They might be doing this because they feel that the IT organization is being overly cautious or needlessly paranoid or poorly informed about the knowledge/software/code they are acquiring, and, given a limited amount of time and budget, they need to get their work done or they will be on the hook for not having it done when the deadline hits. I'll assign this category of activity the term "skunkworks" for the sake of brevity, with the general idea that these activities are actively beneficial to the organization, come with a low risk, generally have very little impact on IT infrastructure, and very high upside for the company.

  - People who want to participate in social networking, banking, personal email, etc., in cases where these services are blocked from their work computer, will often access them from a personal device, OR from the work device after taking the measures mentioned above. They are not willing to leave the work area just to tell their spouse to order pizza, order tickets to a baseball game, or check whether stopping by the store tonight will overdraw their checking account. This might also extend to watching a short YouTube video for pleasure, e.g. when a conversation reminds you of a meme and you want to share it with a coworker.

They may feel that their actions are harmless to the company and benefit them, and are unwilling to give up this freedom for the sake of the company, because they need to live their lives and can't work eight hours straight like a robot without interruptions from real life. After all, even if they adhered strictly to the policy, they would have to spend a lot of time temporarily out of the office to handle these issues; the issues don't go away just because the employee is compliant with policy - their productivity would actually go *down* if they have to leave the facility to do these things.

I'll assign this category of activity the term "RL" (real life) for the sake of brevity, with the general idea that these activities are neither harmful nor beneficial to the organization on the whole (some of them boost morale while others just leech Internet bandwidth with no benefit to the company), come with a medium risk of security exploits (especially from the Adobe Flash and Oracle Java browser plugins), generally have a low to moderate impact on IT infrastructure, and minimal upside for the company (can benefit morale, but no valid business justification, strictly speaking).

  - People who want to do things that are *actually harmful to the company* (downloading/viewing illegal content, acting as a malicious insider to compromise the IT environment, leaking product announcements, etc.) will take many of the same measures as people who want to do things that are more benign. This needs little explanation. I'll assign this category of activity the term "bad actors" for the sake of brevity, with the general idea that these activities are very harmful to the business, come with a very high risk of security compromise, have a major impact on IT infrastructure, and a very high downside for the company.

TL;DR:
Skunkworks: Good for business, low risk if people know what they're doing, minimal impact on infrastructure, high upside for business if allowed to continue.
RL: Break even for business, low-medium risk if people know what they're doing, moderate impact on infrastructure, small upside for business (morale boost) if allowed to continue.
Bad actors: Terrible for business, high risk, high impact, high downside if allowed to continue.

Part 3: What I Think

Noteworthy is that the *technical means* employed by all three categories - skunkworks, RL, and bad actors - are very similar or even identical:

  - Using personal devices to transfer information.
  - Sneakernet.
  - Bypassing proxies or web filters.
  - VPNs to outside networks from the work device(s) (if possible).
  - Sites that *aren't* blacklisted, but that the company would probably blacklist if it knew about them.

The problem is that you lose the very significant upsides of skunkworks and (to a lesser extent) RL if you go to the ends of the earth in the quest to close potential spillage sources from the bad actors. You don't *really* think you can get your IT acceptance processes to be responsive and lean enough to work with skunkworks people, do you? Do you? Because think again. I've yet to see any company that can do it successfully.

There's also the principle best illustrated by Princess Leia in Star Wars Episode IV: "The more you tighten your grip [...] the more [leaks] will slip through your fingers." Basically, people are going to do skunkworks and RL things one way or another; if you close off all the legitimate ways for them to do it, they'll resort to desperate measures, and maybe even (intentionally or otherwise) spill some company secrets along the way.

Rather than crafting IT policies with the intent to say "no" to everything, instead companies should craft policies that are as open as possible, allowing employees great freedoms, while being agile enough to frequently update software (especially to implement security patches) so that users who are *not* bad actors will be secure in their web-browsing and video-watching. Deal with the people who waste too much time or are actively harmful on an individual basis. Those processes are self-limiting because those committing the acts will be fired for performance or disciplinary reasons anyway.

It's amazing to me that 90% of the IT organizations within large companies and governmental agencies still think that they can use a combination of strictly-written policy, fear, and a deep packet inspection firewall to stop people from slacking off at work or leaking company information. The fact is, you can't stop it; you're only hurting the people who would not abuse the privilege anyway, and would actively use it for good to benefit the company if your policies weren't so restrictive in the first place.

Sent from behind a restrictive proxy, while "defocusing" on personal time. I'll make up the time it took me to write this later in the evening, as always.

Comment Re:Stay Grandfathered into your unlimited plan (Score 1) 129

It's a bit off-topic relative to this OP, but I wonder how long it'll be until AT&T and Verizon just decide to unilaterally either (A) move people off of unlimited plans and onto limited/shared plans; or (B) threaten to cancel their service entirely if they refuse to move to a new plan.

As an unlimited subscriber, this prospect scares me a bit every time I think of it. I know that unless we're able to change the carriers' attitudes about unlimited (which is really an uphill battle for many reasons), the day will come when they pull this. AT&T, generally being the forerunner in screwing customers, will probably do it first, with Verizon following along like a loyal lapdog 6 months later (or thereabouts).

How long can this continue? I'm really not ready to move into a world without unlimited data, but it's coming whether I want it to or not. :(

Comment Re:A gigabyte is not worth a dollar, much less 10 (Score 0) 129

Frame it however you want, Libertarian free-market scum. You're so narrow-minded that you couldn't see your hand if you held it right in front of your face. To put it in terms that your tunnel-vision paramecium brain can understand: people are rapidly becoming unwilling to pay $10 per GB. With the emergence of alternatives, it's only a matter of time -- a short time compared to how long they've been milking $10/GB -- until their market dries up. So they can either drop their prices or go out of business. The choice is theirs.

Picking one tiny piece of my post and applying your dogma to it while insulting me is not going to win you any arguments.

Comment A gigabyte is not worth a dollar, much less 10 (Score 1) 129

A gigabyte of data transmitted over the public Internet is not worth a dollar, much less $10. Carriers do not *need* to charge that much, but they choose to because it's profitable and you don't have any other choices.

Well, you do, kinda. You can "rent" a grandfathered unlimited data plan on eBay from someone who still has one on Verizon or AT&T. It'll be expensive, but if you use data like nobody's business, it's the best way to go. Don't do this if you plan to "sip" data, though, or you'll end up paying MORE than you would with a cheap limited plan.

Then there's T-Mo, as someone else mentioned, but they have the huge downside that you get throttled after a certain amount of data. And the throttling is brutal: you can barely function once you're throttled for the month. It'd be fine -- great, even -- if they reduced you to 25% of full speed. But no, they bump you down to what basically amounts to 2G, if it isn't *actually* 2G. That's not a practical Internet connection for most purposes, because everything will time out trying to use it.

Then there's Sprint. I think they're still selling unlimited data plans without throttling *in most cases*, but if you're in the top 5% of users AND you're in a congested area (cell tower is saturated), they'll throttle you.

I'd rather have T-Mo OR Sprint over a "hard cap" data plan with overage fees beyond the cap. But my preference is still for the unlimited data plan I have from Verizon. I don't have any good suggestions for how to manage your data with a 2 GB cap because I would be unable to do that myself. I don't think it's a reasonable cap and it's not acceptable in 2015. The minimum plan should be 10 GB and they should make that as cheap as their current minimum plan.

The carriers have got to stop gouging the public for access to Internet services. It's killing the economy, because so many businesses besides the carriers depend on customers having unrestricted Internet access in order to sell them services.

Analogy: If you can't afford to pay the toll at the toll bridge, how are you going to get to the other side of the river and buy a new car at the dealership there? Well, you won't - the car dealership will go out of business for lack of customers. This is actually happening in the digital economy today.

Comment Other explanations for the data (Score 1) 300

The article offers a few speculative explanations for why the data is skewed this way, but none of them are backed by hard evidence, and there are numerous other possibilities that are no more or less plausible:

  - Maybe these early adopters are just the *wrong kind* - if there is some correlation between buying habits and social attitudes, maybe these early adopters are not very social. A product becomes mainstream through word-of-mouth viral marketing. If the people you reach initially tend not to advocate the products they buy, you never get the "multiplier" effect, where one early adopter leads to 50 or 100 or 1000 second- or third-generation adopters who buy it because a friend told them it was good. If this virtuous cycle never gets started, you have to rely on much less effective external marketing, like TV ads; people are bombarded with so many ads that they now treat them with natural skepticism and disdain, and many actively dislike them.

  - Maybe your advertising determines who your market is. Some subtle cue in the advertising you're putting out, intentional or otherwise, could be attracting certain types of people while carrying attributes that strongly turn off the mainstream audience. If you're targeting the mainstream, your commercials cannot contain any feelings or opinions or even visual cues that the mainstream dislikes.

  - Maybe it's just random luck.

Comment Frankenstein or Dracula? You decide (Score 1) 484

My "build" of an OS out of constituent components would be:

  - Pure 64-bit; never never never never never any 32-bit support whatsoever throughout the software ecosystem
  - The Linux kernel
  - Solaris Zones (containers) able to host the latest Linux userspaces as well as an optional BSD and Solaris userspace with no virtualization
  - ZFS (okay, probably the latest version from Oracle is better than what Illumos has, so let's go with that)
  - An open-source version of Microsoft's WDDM as the graphics hardware abstraction layer (drivers are then built on that and are fully open source)
  - The best of Linux cgroups and namespaces reconciled with Solaris Zones
  - For a hypervisor (if you need to run Windows), Xen dom0/domU would be available
  - DTrace from Solaris
  - kdbus
  - systemd core, but omitting many/most of the optional components (available as packages but not installed by default)
  - RPM for the package format, including Delta RPMs (drpms) for updates and LZMA compression on the package payloads
  - aptitude or yum for the package management interface / downloader
  - GNU bash
  - Entire system compiled with clang by default, but with gcc available as a working alternative (competition is good; One Compiler To Rule Them All is bad for progress)
  - A fully working, optimized, functionally validated Win32 and Win64 emulator (including graphics libs) supporting Windows Desktop apps that require any version of Windows from 95 to 7, for those legacy apps that just won't die
  - Both the latest open source versions of Java and .NET installed, available by default, and automatically updated with no nags, but with neither one shipping any browser plugins
  - No Flash!

Comment What's the next moonshot? (Score 2) 383

In the 20th century, humanity took a transformational step forward when it left its home planet and walked on the Moon. This impacted billions of lives and changed everyone's perspective on our role in the universe.

A lot of bad stuff happened, too -- weaponization of nuclear energy; oppressive governments; new tools like computers being twisted to serve repressive governments rather than the common man; continual and destructive wars; accelerating destruction of the environment and natural resources; etc.

If there's one objective -- one imperative with a positive end-goal that will transform humanity, or at least the way we think about ourselves, in a good way -- that the current and next generation should focus on, what objective do you think that should be?

In short, what should be our next moonshot as a global society? I say global because I believe any objective worth achieving at this scale cannot be accomplished even by a small cadre of very powerful advanced industrial nations. We would need truly global support for any initiative on the scale I'm talking about.

Comment Friendly OpenJDK Upstream? (Score 2) 328

Does anyone know if there exists, or can we start, a project like this:

(1) They distribute binaries for Windows (32-bit and 64-bit). Other platforms would be awesome, too, but Linux already has great OpenJDK support in package managers, so that may not even be necessary. Windows is the platform where it really sucks.

(2) They have a custom-designed updater that schedules itself to run every so often (say, every two weeks); launches; checks for an update; and then *EXITS* if it doesn't find one. If it does find one, it gives the user a simple "Yes/No/Ask Later" prompt: if they pick Yes, it silently removes the old OpenJDK version and installs the new one; if they pick No, it skips that version and only reminds them when the next update comes out; and if they pick Ask Later, it bugs them again next week. Once it finishes whatever it has to do, it EXITS, rather than remaining in memory forever like the Oracle Java updater. (A sketch of this flow follows the list.)

(3) No adware. All components free and open source software. Installer should only depend on FOSS (no InstallShield, etc.).

(4) Gives the user the option to enable/disable the Java plugin for each browser detected on the system at install time, and lets this be reconfigured after install via a config GUI. The default should be NOT to install the browser plugins, since they have a history of severe vulnerabilities, but users are free to request their installation anyway.

(5) Installer should come in two forms: a "net installer" that has a tiny size (1 MB or less) and only downloads the requested components at runtime (allowing user to select whether they want the source code, the JDK or just the JRE, etc.), and an "offline installer" that contains the entire kitchen sink and does not need Internet connectivity (for environments behind a restrictive proxy, or no network connection).

(6) User should have the option to install OpenJDK without admin rights! If they don't have admin rights, stick it in AppData\Local and put the plugins in a similarly user-scoped folder (not possible with IE as far as I know, but should work with Chrome and Firefox). Auto-detect whether the user can be an admin, and only give the UAC prompt if the user's account can actually accept the prompt; otherwise, fall back to "non-admin" install.
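
Here's the update-check flow from point (2) as a rough sketch. It's written as a shell script purely for brevity -- the real thing would be a native Windows program -- and the URL, paths, and prompt below are all hypothetical:

    #!/bin/sh
    # Hypothetical sketch of the desired updater behavior; every URL
    # and path here is made up for illustration.
    VERSION_URL="https://builds.example.org/openjdk/LATEST"
    INSTALL_DIR="${INSTALL_DIR:-$HOME/openjdk}"

    latest=$(curl -fsS "$VERSION_URL") || exit 0   # server unreachable: exit, try next run
    installed=$(cat "$INSTALL_DIR/VERSION" 2>/dev/null)
    [ "$latest" = "$installed" ] && exit 0         # up to date: EXIT, don't linger

    grep -qx "$latest" "$INSTALL_DIR/skipped" 2>/dev/null && exit 0  # user said "No" to this one

    printf 'OpenJDK %s is available. Install? [yes/no/later] ' "$latest"
    read -r answer
    case "$answer" in
      yes) echo "(silently remove $installed and install $latest here)" ;;  # stand-in for the real work
      no)  echo "$latest" >> "$INSTALL_DIR/skipped" ;;   # never ask about this version again
      *)   ;;   # "later": the real updater would re-prompt in a week; here we just wait for the next run
    esac
    exit 0   # whatever happened, EXIT -- no resident process, unlike Oracle's updater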

Gee, sounds like if nothing like this exists, I have the requirements / design doc in my head...

If I disappear in my room for a week and don't emerge until this thing is on github, tell my family and my cat that I love them.

Comment Re:Containers can be VMs *or* apps, Docker. (Score 1) 48

I'm well aware of the advantages of SmartOS, actually. However, I'm in the process of migrating my dedicated server from one system to another (upgrading the hardware while staying with the same hosting provider). In doing so, I've made the difficult decision to move *away* from SmartOS and back to GNU/Linux, for the following reasons:

(1) Despite promises to the contrary, compiling most C/C++ FOSS is *not* easy on SmartOS. Also despite promises to the contrary, a vast amount of FOSS that I need is *not* available in SmartOS's repositories, nor in any SmartOS equivalent of Ubuntu PPAs. 99% of projects need extensive source-level patching, awful environment-variable hacks, symlinked binaries, edited configure scripts, or some nasty combination of the above (see the sketch after this list). Some extremely complicated projects, like PhantomJS, are nearly impossible to compile on anything that isn't the Linux kernel with glibc as the C library, libstdc++ as the C++ library, and gcc 4.x as the compiler. For some of these use cases I can't substitute an alternative program, and PhantomJS isn't the only one -- I simply don't have the time or willpower to spend weeks fighting horrendous build environments that are opaque to diagnosis, to do something I could accomplish with "aptitude install phantomjs" on Debian or Ubuntu or Devuan or Mint or... you get the point.

(2) Although the kernel and core system were sound, I experienced inexplicable random crashes of a 4 GiB KVM guest running Windows Server 2012 on SmartOS. I tried fixing this in numerous ways -- updating the host OS, updating KVM, looking at logs, reducing the amount of RAM assigned to the guest, etc. -- but after about 24 hours of uptime, the VM would just crash (on the host side). I don't experience this with KVM on the Linux kernel. And for various reasons I can't *not* have a Windows VM for certain limited use cases on my server. It's a multipurpose box and it needs to do a lot of different things; I don't have the money to buy a dozen different boxes, each filling its own little niche.

(3) To run the aforementioned programs that are infinitely resistant to compiling cleanly on SmartOS, I ended up firing up a paravirtualized Linux kernel (CentOS 7) on top of SmartOS. This ran well enough, but it just felt *unclean* to need to run a UNIX on top of a UNIX, when all I'm doing is running FOSS programs. Although there is that one program that's binary-only which absolutely *does* require Linux...

(4) I tried messing with lx branded zones, but could never get them to do anything useful except print error messages. I of course googled those error messages, asked about them in IRC, and the like, but the most I got was someone saying "Huh... that's strange." No solution was offered. lx branded zones have a lot of promise if they can emulate a genuinely modern GNU/Linux distro such as CentOS 7, Ubuntu 14.04.2, or Debian Jessie, but until they can do that, with few or no gotchas, I really can't be bothered to mess with them in their alpha/experimental state.

(5) lxd to the rescue! Canonical (under the Linux Containers project) is working on a daemon called lxd ("lex-dee") which brings sanity and proper isolation (through already-existing kernel facilities) to the lxc project. The `lxc` command that comes with `lxd` operates very similarly to `vmadm` in SmartOS, and the isolation guarantees you expect from SmartOS are pretty much true of lxd guests as well. At their core they just use mainline Linux kernel features; the difference is that lxd actually uses those facilities *in a smart way* to isolate guests. Docker, on the other hand, encourages the guests to be friends with one another. Yuck.
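
For flavor, a "typical" build session of the kind point (1) complains about looks something like this on SmartOS (the flags and paths are illustrative; /opt/local is pkgsrc's usual prefix there):

    # Illustrative only: the environment juggling many configure scripts
    # need before they behave on SmartOS.
    CC=gcc CXX=g++ MAKE=gmake \
    CPPFLAGS="-I/opt/local/include" \
    LDFLAGS="-L/opt/local/lib -R/opt/local/lib" \
    ./configure --prefix=/opt/local
    gmake && gmake install

And to illustrate point (5), the two CLIs really do feel parallel (the image, manifest, and guest names are just examples):

    # SmartOS: create a zone from a JSON manifest, then list zones
    vmadm create -f webzone.json
    vmadm list

    # lxd: launch an isolated container from an image, then list containers
    lxc launch ubuntu:14.04 webguest
    lxc list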

So:

  - I had problems with SmartOS in actually using it, despite a promising and rock-solid base system. It needs a lot (and I mean a LOT) of work on compatibility with existing FOSS and/or better support for lx zones based on modern (recently released) Linux distros.

  - I still have ZFS on Linux and it works great.

  - KVM isn't broken on Linux like it is on SmartOS, so I can have my Windows VM and not constantly have to `vmadm start $WINDOZE` when it crashes.

  - Ubuntu's lxd is a viable isolated container solution for me on Linux, using the mainline Linux kernel.

  - By using native Linux, I don't have to virtualize a Linux kernel, so any software I want to run that happens to depend heavily on Linux will "just work", and I likely won't even have to compile it thanks to everything under the sun being in a PPA.

  - Rebootless kernel updates :D

P.S. - my use cases for this server are varied and diverse, from gaming to music streaming to file hosting to VPN to IDS to... well, you name it. "But it's got nginx in the package repo!" isn't enough package support for me.

Sorry, SmartOS, but you're not ready to meet my needs; Ubuntu Server 14.04.2 definitely is.

Comment Re:Containers can be VMs *or* apps, Docker. (Score 1) 48

They seem viable enough; all the prerequisite container isolation concepts seem to be implemented, though I'm not sure if there are any hidden "gotchas" where certain resources would not be isolated. I'd have to investigate more.

Then I'd have to learn all the different system administration concepts and commands for using an entirely new OS that I've never used before. I've used Solaris (and variants), about 9 Linux distros, Windows, and Mac, so maybe I'm more qualified as a "new platform learner" than others, but it's still not really something I wanna do.

Especially considering there are about a dozen different, extremely complicated software products that I want to run on this system, in the containers (not in hypervisor-based VMs), which are either binary-only Linux/ELF executables or source code so resistant to running on non-Linux that it's ridiculous (I spent the better part of a week trying to compile one of these programs for SmartOS).

So yeah, not really going down that rabbit hole, sorry.

Comment Containers can be VMs *or* apps, Docker. (Score 5, Interesting) 48

Unless this unified "Open Container Project" supports both the unprivileged, isolated "machine" concept of a container AND the trusted, shared "app" concept of a container, it's going nowhere fast for me.

Solaris Zones. linux-vserver containers. Now Canonical's lxd. Few participants in the container effort besides these three seem to understand the value of having containers as *machines*. Give each machine its own static IP, isolate all its resources (memory, processes, users and groups, files, networking, etc.) from the other containers on the system, and you have what's basically a traditional VM (in the early-2000s sense of the word) with a lot less overhead, because there's no hypervisor and only one shared kernel.

Docker seems to pretend that VM-style containers don't (or shouldn't) exist. I fundamentally disagree. I dislike that Docker pushes containers so hard while ignoring this very important use case. I hope the rest of the Linux Foundation is smart enough to recognize the value of this use case and support it.

If not, I'll just have to hope that Canonical's lxd continues to mature and improve.
