> So what exactly is Nebula?
From what I understand (and I may be mistaken):
Nebula in this context is a company that has released a product called "Nebula One," which is a server setup. Nebula One runs OpenStack and some proprietary Nebula software.
The target audience for Nebula One is companies that are inclined to outsource server stuff to "the cloud" - i.e., they don't want the work and responsibility of maintaining their own servers - but also don't want their information in the hands of a third party such as Amazon. Nebula One is intended to be an "it just works" server solution for many cloud-computing-type things, such as managing many VMs or data redundancy. Nebula also provides support for the product. Ideally, the best of all worlds: the ease of use and dedicated support of clouds without the privacy or dependency issues.
I kind of think of it as the RHEL of cloud-related server stuff. If you're savvy enough and do-it-yourself enough, you might be able to build and maintain something comparable yourself. However, some companies are in a position where paying another company for the setup work and support is a better choice.
> That article is horrible.
> Is it just me, or does this new 'cloud' tool have absolutely nothing to do with OpenNebula (which abbreviates itself ONE), a competitor to OpenStack?
I'm not familiar with OpenNebula, but from the sound of it - no, they're not related, beyond being competing codebases. They might have a similar etymology, though. NASA worked on a "Nebula" computing platform which is related to OpenStack - I wouldn't be surprised if OpenNebula also spun off from NASA's Nebula. The CEO of Nebula-the-company worked on (headed?) NASA's Nebula.
There's also an Eve Online group called "Nebula". The product "Nebula One" has some legitimate promise IMO, but the name is ripe for causing confusion.
Oddly enough, Micro-USB was specifically designed with the exact complaint you have in mind. While it is smaller than the Mini-USB it replaced, that was secondary to its main purpose, which was to improve durability. Not only is it supposed to be more durable in terms of the number of times it can be inserted/removed, but it is also designed such that, when it fails, the (most likely cheaper) cord will be the part to break rather than the (most likely more expensive) device. I'd cite a source but I can't pick which one - look up any documentation on Micro-USB and you'll read the same thing.
For what it is worth, in my personal experience, I have not seen any such issues with Micro-USB. The only times I can recall seeing one fail have been because the cord itself - well away from the connector - was damaged, such as by a wheeled chair rolling over it. However, I have seen a number of the just-replaced Apple connectors causing issues. For example, this summer I saw an iPod where the connector on the device - not the cord - was bent to one side so that the cord-side connector would not fit in. Mind you, this wasn't so terrible - I repaired it with a thin knife and a careful hand - but still, in my personal experience, the old Apple connector has a significantly worse record than Micro-USB.
While I'm blessed enough to have full use of both my arms, I have repeatedly run into situations where I am significantly more skilled than those I am playing with, and to keep things interesting I restrict myself to one hand in a number of games. While I am far better with both hands, it is not impossible to be somewhat competitive in many games with only one. Occasionally I've found myself (successfully) using these techniques in tournament matches when I feel a sufficient need to make a point. Moreover, I have in the past found myself with a pressing desire to play a new game but absolutely no spare time, so I double up eating with playing. For example, I beat (the GameCube version of) The Legend of Zelda: Twilight Princess entirely with my left hand, so that I could use my right hand to eat during my lunch breaks.
I figured I could give some advice as a result of my experiences.
First off, I should note that I primarily do this on a Nintendo GameCube controller (as I am an avid Super Smash Bros. Melee fan), but in my experience the techniques translate well to the Xbox 360 controller. I have not given serious consideration to a PlayStation controller, and naturally none of this translates to keyboard/mouse.
Secondly, I should note that I've yet to find a good way (assuming an unmodified controller) to have immediate access to both sticks and all of the shoulder buttons simultaneously. Typically, you'll have to accept delayed access to something. This is a limited issue in some games (e.g., fighting games), but can be very pressing in others (e.g., first-person shooters).
You did not state in TFS which hand you have lost, and so I will cover both hands. There are two main grips I use, depending on the specific situation in the specific game.
The primarily left-handed grip: Sitting down, place the controller on your left thigh or knee. Place your pinky on the left stick, thumb on the right stick, and pointer finger over the main face buttons. To access shoulder buttons, have either your pinky or pointer finger reach over and around the controller to the appropriate button. This will most likely feel awkward at first, especially using the pinky on the left stick, but I assure you that with practice it is quite possible to become adept at it. The biggest limitation is the reach time for the shoulder buttons.
The second left-handed grip is a modification of the way the left hand typically holds the controller, which I have often heard referred to as "the left-handed claw". Instead of using the thumb on the left stick, slide it down to the directional pad, and use the pointer finger on the stick. If you try to also cover both left shoulder buttons, you'll find you only have the pinky to provide support - rest the controller on your leg. The obvious limitation here is the significant lack of access to the right side of the controller. I use this in SSBM for wavedashing when needed (jumping with up).
It is quite possible to switch between these two grips on the fly. While you'll have, at best, delayed access to any given input on the controller, you will have access to everything. With practice, I'm reasonably confident someone with one hand could progress through many 360/NGC games built with two-handed play in mind.
The right hand is, in my opinion, significantly harder to use, but not impossible:
The primarily right-handed grip: Rest the controller on your right thigh or knee. Place your pointer finger on the left stick, your ring finger on the right stick, and your pinky over the main face buttons. As with the left hand, reach over the controller with either your pointer finger or pinky when you need the shoulder buttons.
I cannot think of any games that are completely playable with only the right-handed claw (see the left-handed claw above for reference), so I won't really cover it. I should note that, when playing with both hands, I use the right-handed claw. However, most games that tolerate delayed access to the shoulder buttons could be playable with the primarily right-handed grip described above.
Whether the amount of practice needed to become proficient with the techniques I've described above is worth it is up to you, but do believe me when I say that, with practice, many if not most games are quite playable.
As fun as it is to bash Microsoft, they're not the only ones who do this. Presumably there is some technical reason why such limits exist, but I am at a loss as to what it would be. Would someone be able to explain to me why they are put in place?
It seems that with modern computing capability, absurdly long passwords would be trivial to support. The hashed password length is the same regardless of the input, so I can't see storage space as the issue. The only other idea that comes to mind is the computational cost of hashing the passwords, but even that has to be trivial by today's standards, even with millions of users hitting the servers. Why not go overboard and just allow several kilobytes' worth of password?
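To illustrate the storage point: a cryptographic hash digest has a fixed size no matter how long the input is, so a multi-kilobyte password costs no more to store (hashed) than a short one. A quick sketch using `sha256sum` from coreutils:

```shell
# A SHA-256 digest is always 32 bytes (64 hex characters), whether the
# input is 7 characters or 4 KB.
short=$(printf 'hunter2' | sha256sum | cut -d' ' -f1)
long=$(head -c 4096 /dev/zero | tr '\0' 'a' | sha256sum | cut -d' ' -f1)

echo "short input digest length: ${#short}"   # 64
echo "long input digest length:  ${#long}"    # also 64
```

(SHA-256 is just a stand-in here; a real password store would use a deliberately slow hash like bcrypt or scrypt, but those, too, produce fixed-size output.)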
> I see, so in practice you don't think they'll end up forking too much in terms of dependencies. I think you're wrong there, but we'll know in a few years. I've had huge problems between supposedly identical Linux systems, like RHEL 6 and CentOS 6, over things like minor differences in their Java libraries. I know Slackware had horrible problems with Gnome.
Not exactly, but close, at least if I am understanding you correctly. The idea is that things which have dependencies which would be issues across distros will simply reference their dependencies from their own distros. So if you run a RHEL6 java program, it will use RHEL6 java libraries, and if you run a CentOS6 java program, it will run CentOS6 java libraries. I should emphasize, however, that you still end up with an extremely large number of things that can interact transparently; my favorite example of this being an RSS reader which can open a web page in a browser, where both the RSS reader and browser are from different distributions.
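To make the per-distro grouping concrete, here is a toy sketch (all paths and names made up for illustration): each distro's tree carries its own copy of a dependency, and a program always resolves against its own tree. A real setup would do the equivalent with actual library paths, `LD_LIBRARY_PATH`, or bind mounts rather than plain text files:

```shell
# Two pretend distro trees, each shipping its own build of a "library":
mkdir -p /tmp/depdemo/rhel6/bin /tmp/depdemo/rhel6/lib
mkdir -p /tmp/depdemo/cent6/bin /tmp/depdemo/cent6/lib
echo "libfoo 1.2 (rhel6 build)" > /tmp/depdemo/rhel6/lib/libfoo.txt
echo "libfoo 1.2 (cent6 build)" > /tmp/depdemo/cent6/lib/libfoo.txt

# Each distro's "app" reads its dependency from its own lib directory only,
# so a rhel6 program can never accidentally pick up the cent6 build:
for d in rhel6 cent6; do
    cat > /tmp/depdemo/$d/bin/app <<EOF
#!/bin/sh
cat /tmp/depdemo/$d/lib/libfoo.txt
EOF
    chmod +x /tmp/depdemo/$d/bin/app
done

/tmp/depdemo/rhel6/bin/app   # prints the rhel6 build of libfoo
/tmp/depdemo/cent6/bin/app   # prints the cent6 build
```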
> We are down to a fundamental "will it work"?
I readily admit the idea I've had is quite a bit out there, and so I do not fault those who do not take me at my word, but I can honestly say that, for the use-cases I have tested for the past few years, it works. The current release is more or less identical to what I have been using for the last nine months on all of my personal computers on a regular basis.
> So anyway, the project sounds cool. I know there are people who really, really want to be able to combine features from different distributions.
I'm glad you feel that way.
> I load up my X. Do I load X11R6 or R7? Presumably I can pick.
It will default to a native version if available (so if you're in Mint's environment, you'll get Mint's X by default), but yes, you can pick.
> I'm not sure there isn't kernel stuff here, but let's assume not.
Depending on the software you want in the various distros, there could very well be both an upper and lower bound on the kernel version, but the range is sufficiently large that I've never actually run into it.
> So I pick R7 and that doesn't work with my Gnome 2 which was built around R6.
Surprisingly, most of the software I have tried this with is fairly standardized and backwards compatible. While I am sure you can find things which will conflict, it has been surprisingly rare in my experience thus far. I have played quite a bit with running older software in newer X11s, and newer software in older X11s, with surprising success. The pool of software I have tried this with may not be large enough to catch issues which do exist, though; I admit such things could very well be out there. In the worst-case scenario, should such conflicts exist and should I find no other solution, you can always just use those components in the groups they have to be bunched in. So maybe one version of a DE has to be paired with one version of X11 - you can still pick from every other distro for just about everything else.
> I guess my question to you would be: why use Linux as the base here? Why not use an OS which just spins off these various distributions as virtual machines?
I'm not completely sure I understand what you're asking. I'll throw some answers around and see if one of them gets lucky. I'm using the Linux kernel because it has the widest range of userland software which could benefit from this approach. While there is quite a bit of value in, say, the BSDs, Bedrock's system will not really benefit them nearly as much.
Virtualization creates a substantial amount of overhead, and purposely separates things to a degree that harms the integrated feel I wanted. For example, once Valve brings Steam to Linux, I'd like to play some 3D games that would not work well in a virtual machine but work fine with the chroot system I'm using. Moreover, the way programs from different distros can all interact as though they were in the same distro would be significantly harder to achieve with virtualization.
If that did not answer your questions, feel free to ask again, preferably rephrasing them and/or pointing out how I've failed to properly explain the matter at hand, and I'll do my best to try again.
> Hi, since it sounds like you are actually familiar.
Yes, I'm the lead(/only) dev. I've had at least one person note distaste at the fact I failed to specify this in the summary - I apologize for not making it clear.
> How is Bedrock going to do this?
I know it is against Slashdot tradition, but if you RTFA you'll have what I hope is a sufficient answer. Regardless, I'll take a crack at answering your questions directly:
By manipulating the filesystem (through chroots and bind mounts) as well as the PATH, Bedrock makes arbitrary commands from the other Linux distributions on disk available in a fairly transparent fashion.
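As a rough sketch of the PATH half of this (toy example with made-up paths and names; the real system also chroots into the client distro before running the command, which needs root and is omitted here):

```shell
# Two pretend "client distro" trees, each providing its own `hello` command:
mkdir -p /tmp/brdemo/arch/bin /tmp/brdemo/debian/bin /tmp/brdemo/brpath
printf '#!/bin/sh\necho "hello from arch"\n'   > /tmp/brdemo/arch/bin/hello
printf '#!/bin/sh\necho "hello from debian"\n' > /tmp/brdemo/debian/bin/hello
chmod +x /tmp/brdemo/arch/bin/hello /tmp/brdemo/debian/bin/hello

# A stub in a shared PATH directory dispatches to the chosen client's copy;
# a real implementation would chroot/bind-mount into that client first.
cat > /tmp/brdemo/brpath/hello <<'EOF'
#!/bin/sh
client="${BR_CLIENT:-arch}"
exec "/tmp/brdemo/$client/bin/hello" "$@"
EOF
chmod +x /tmp/brdemo/brpath/hello

PATH="/tmp/brdemo/brpath:$PATH"
hello                    # prints "hello from arch"
BR_CLIENT=debian hello   # prints "hello from debian"
```

The point of the stub layer is that, to the user, `hello` looks like one ordinary command on the PATH even though its implementation lives in another distro's tree.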
> For example, to get cutting-edge features to work you often need to put in destabilizing patches to the kernel or use newer, more buggy versions of core libraries and utilities. This is why installing some little thing from Debian unstable on a Debian stable box can trigger 100 package upgrades as chains of dependencies get worked through.
This is very true. What I'm doing is grouping executables with the rest of their distro: each program gets the dependencies it needs from the same repository it came from. There are downsides to this, such as a substantial amount of duplication, but it results in a nice, clean way to ensure everything has what it needs to run. The magic comes in the way I use special stuff in the PATH and bind mounting to ensure that most things in one distro can run in the others.
> Essentially it means creating an entirely new level of complex special instructions on a per-package basis which make automatic dependency resolution not work as well.
Since the entire client distro is available on disk, I just let it figure out dependencies as it normally would, making this largely a non-issue.
If I've failed to explain it, and the linked explanation does not suffice, feel free to ask me again, with an emphasis on pointing out how my explanation failed.
If I give up a number of things, including but not limited to the three items I mentioned in the prior comment, it sure is possible, but it'd be a far less pleasant experience.
If there is sufficient interest, and time is available, I could most definitely consider providing something with as much functionality as I can muster that sits on top of a major distro, although enough functionality would be lost that I could not honestly say I would enjoy using it.
> You aren't answering my specific point. Ubuntu and CentOS v6 are both using Upstart. Fedora uses systemd. Gentoo uses OpenRC. If you support only sysvrc, then you're supporting only Debian...
I'm not intentionally avoiding it. I am not supporting only sysvrc; that was simply an example. Ubuntu's Upstart, as another example, works fine. I don't have systemd or OpenRC on my system at the moment to give you a definitive "yes, they work as well," but I fully plan to add support for them. There are a lot of Gentoo fans who have offered to assist me.
> Yes, that's what I was referring to. *NO*, it's not trivial at all. The stuff from systemd is very different from initrc, and you won't succeed in having something that works just by specifying "things by hand". You really need to address the issue of events, not only the order used to start daemons.
I've been using this stuff successfully for years; I used the first alpha release alone for the last nine months before I released it. From those experiences, I have found that manually solving boot dependencies has been a trivial matter. Perhaps I'm still misunderstanding the exact issue at hand, or maybe I've just been absurdly lucky and will run into significant issues later.
> I understand that you have integrated stuff to make it easy to use a chroot.
Not necessarily easy, but to make it all feel cohesive. But perhaps I'm over-analyzing that specific sentence and you are aware of my goal.
> But if you aren't addressing the system start,
I should indicate here that, as far as I can tell, I am, at least to my satisfaction. Perhaps not sufficiently for your purposes, at least not with the first release.
> then why not have Bedrock as a simple package for, let's say, Debian
I think I delivered a satisfactory answer to this, and to most of the rest of your post, in another thread of the conversation. If I haven't, I'll take another crack at it. It could very well be that my logic for not doing this is faulty and I just haven't grasped why yet. A few other items I would like to address:
> replace GNU tools by busybox (a poor choice, IMO),
I wanted to keep the base as simple as possible. With busybox, I can update nearly the entire base userland by replacing a single file. The functionality busybox lacks is a non-issue, as it is provided by the clients; I do not intend for anyone to actually use the busybox commands for much at all. When I used Debian as a base long ago, I found its upkeep was far higher than necessary. I never really used it - it just did the init and that was it. However, when a release was EOL'd, I had to attempt either a dist-upgrade or a reinstall, neither of which I found appealing, as the side effects could be far-reaching. Replacing one busybox binary with another is much, much cleaner.
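The single-binary model busybox uses can be sketched like this (a toy stand-in, not busybox itself: one executable dispatches on the name it was invoked as, and every "applet" name is just a symlink to it, so updating the whole set means replacing one file):

```shell
# One executable, many names - dispatch on argv[0], like busybox applets.
mkdir -p /tmp/bbdemo
cat > /tmp/bbdemo/multitool <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
    greet) echo "hello" ;;
    shout) echo "HELLO" ;;
    *)     echo "usage: greet|shout" ;;
esac
EOF
chmod +x /tmp/bbdemo/multitool

# Every "applet" is a symlink back to the one real file:
ln -sf multitool /tmp/bbdemo/greet
ln -sf multitool /tmp/bbdemo/shout

/tmp/bbdemo/greet   # prints "hello"
/tmp/bbdemo/shout   # prints "HELLO"
```

Swapping in a new `multitool` updates `greet` and `shout` in one step, which is the property I care about for the base.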
> I'm not dismissing your work here; I'm trying to understand your design choices, as I wouldn't have done it the same way.
I appreciate that fact. You've been more than patient with my thus far unsatisfactory answers.