Well, on my Fedora 20 box it's encrypted.
1. NetworkManager can do both
2. Passwords are _always_ stored with a reversible encryption algorithm
3. Solution: KDE uses KWallet, which f*cks with my brain every time I want to connect to my Wi-Fi
Mere mortals use the interface. They like Ubuntu because of some Ubuntu-only options and free CDs, and because it's popular, not because it's open source or simply "good".
So, it's time to think about middleware: improve everything that is common to all distros, improve the standards, and force every distro to be visually interoperable. Nobody cares about dpkg or rpm, kernel versions, or Canonical. Users care about what they use. They use the desktop. They like GNOME? Improve it, make it the same everywhere.
Why should I care about Ubuntu or openSUSE if I use GNOME on both? Where's the difference? That's the issue the community should resolve, and it's been going on for the last 5 years: we've got middleware everywhere, in packaging, sound, KDE, GNOME, etc.
I guess Canonical realizes that one day the usability war will come to an end, and that time has come. Now it must create something that will force its old users to move to new versions. That's why they are so crazy about Mir and pushing Ubuntu-only software. That's the reason.
Ignoring the LSB is evil. RPM is LSB-aware.
RPM is great; at least it's documented very well (where are the dpkg docs? did it take 18 years to write this bullshit I see?), and it keeps being refreshed, growing, adding functionality, improving usability, etc.
Yum sitting on top of it doesn't matter. There is PackageKit, which does everything a middleware interface must do: it supports rpm, dpkg, apt, yum, etc. That's what is important for you, because YOU ARE a loser. You never use dpkg, you never build any packages, so you just can't compare RPM and dpkg. So what's your problem? Use a GUI and relax.
So far RPM has made many HUGE improvements, including deltas, macro improvements, build-dependency automation, format changes, compression, metadata, language bindings, dependency awesomeness, and middleware. It's classy and MUCH better than dpkg from a technical point of view. Need the user's point of view? Use a GUI and shut up.
So I don't think you have ever gone through the RPM or dpkg docs.
Fedora 3 -> Fedora 19 (release-by-release)
and I will upgrade to Fedora 20 in a few months
You are comparing an enterprise server software compilation with a desktop. Or a T-72 tank to a BMW M5. It's not the same. RHEL/CentOS are not like the Ubuntus. Even the LTS is far from that level.
CentOS is tied to RHEL, which has a lifetime of 13 (!) YEARS. Ubuntu Desktop and Ubuntu Server have 9 MONTHS (!). Ubuntu LTS has up to 5 years (8 years less than RHEL).
How do you imagine an upgrade from a system which is 10 years old? It's very likely that components will be incompatible.
Can you upgrade from Ubuntu 1 to Ubuntu 13? No.
RHEL is made for not doing upgrades at all: stable, tested, security-only updates, so you can be sure that new functionality won't introduce more problems. That's the idea.
If you need a desktop with fresh updates, crashes, and all that, CentOS/RHEL is not an option. None of the distros with a lifecycle over 12 months can accomplish this.
That's untrue. AM3+ sockets and CPU pins can't handle that power.
CPUs are powered with less than 5 volts. In the case of 220 W, that's 220/5 = 44 amps!
OK, the pins are short and 44 amps might be possible, but powering such a device even through a multilayer motherboard would be scary, and it still doesn't look realistic.
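The back-of-the-envelope math above can be sketched like this (a toy calculation, not an electrical model; the 220 W and 5 V figures are just the numbers from the comment, and the ~1.3 V core-voltage line is my own added assumption about typical CPUs):

```python
def current_amps(power_watts: float, voltage_volts: float) -> float:
    """Back-of-the-envelope current draw: I = P / V."""
    return power_watts / voltage_volts

# A 220 W CPU fed from a ~5 V rail, as in the comment above:
print(current_amps(220, 5))    # 44.0 amps through the socket
# Real core voltages are even lower (~1.3 V, my assumption), which is worse:
print(current_amps(220, 1.3))  # ~169 amps delivered by the VRMs
```

The point of the sketch is just that current scales inversely with voltage, so low-voltage rails turn modest wattages into alarming amperages.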
Not really, it helps only at startup, and neither Windows nor Linux arranges data effectively enough.
I've got 32 GB of RAM in my desktop and I'm using green drives. It's terribly slow, always: when loading applications, when using virtualization, etc. On both Linux and Windows. Caching is effective for some things (read-only data, and only if it was read before), but not for everything.
For servers it's even worse: some people bypass the RAM cache and write data directly to disk, for safety.
Yep, and flashcache (from Facebook).
But none of these are included in the kernel.
Picking up a tiny SSD is not a solution for a notebook or a desktop: you need more ports, more money, more energy, etc.
That's correct. A non-firmware cache will be more effective.
We don't know yet how the WD drives work: will we see one whole block device, or will it have two SATA ports with two separate drives? I'm not sure they will expose access to the SSD as some encapsulated stream.
There are Linux solutions like bcache and flashcache that can handle caching. So maybe it's time to use them and include them in the kernel?
CentOS is not a Fedora derivative; there's a huge difference. CentOS is just a rebuild of RHEL's src.rpms. Red Hat must provide the sources for all its changes, but not the binary packages; that is how CentOS was born, and the same goes for Scientific Linux. Yellow Dog was created for the PPC architecture.
Mandrake is/was BASED on Red Hat 5 (not to be confused with RHEL 5) and now has nothing to do with RHEL. It's a separate product going its own commercial way.
SUSE was initially the same as Slackware, but now it has nothing to do with Slack and takes an absolutely different approach.
So, again, why should I have 100 derivative distros doing almost the same thing, instead of two with remixes and subrepos?
Testing is for acceptance into Stable, so it's essentially the same thing without major feature changes and with some fixes: http://www.debian.org/doc/manuals/debian-faq/ch-ftparchives#s-frozen Unstable is more like a development version, not a "Debian Fedora". If they're paying attention and everything is fine, then why do we have aptosid here?

According to DistroWatch, Debian has already spawned more than 120 derivatives (per debian.org!). RHEL/Fedora have 28, and most of them are very specific, like trixbox, ClearOS, Yellow Dog, CentOS, and Scientific Linux. There is huge diversity around Debian, and the community is creating general-use distros outside the Debian project. Why? Is it so hard to allow maintainers to deliver changes and add-ons via subprojects? At least two mainlines? I guess it's because of Ubuntu, and there we've got LTS. It's separatism and a very closed approach, like Oracle playing with MySQL.
It's real torture. People are switching from one derivative distro to another, adding third-party repositories, switching to testing repos, etc. All this brings more incompatibilities and decentralization, and it divides the community's forces.
Ubuntu is Ubuntu, with LTS and short-lifecycle versions of the distro. It's not Debian at all.
Is it so hard for Debian to have two main distros, like "Debian Stable" and "Debian Mainstream"? Everybody would be happy. I've heard more self-flagellating words like "you know, we've got 236272927 packages in here". Oh yes, it's scary: you are using only 5% of those packages, and 95% of users use about 15 of them, I think... Who cares about the rest? Why are they so blind?
5 amps is too much for a standard cable and will require safety measures such as additional pins, available only on compatible thick cables.
I don't think a 5-amp connection is a good idea. It's like plugging a gentle 1 kW barbecue grill and a desperate housewife's desperate iron into the same standard power outlet: it's unsafe. Think about all the problems with overheating, bad contacts, and sparks.
Also, this standard can be implemented only on new desktops. None of the mobile computers will provide quality ports capable of such current, none of them will have an extra 100 watts of power in reserve when running on battery, and even their power supplies would double in size.
How many devices requiring such power have you ever seen? A monitor? Is 16-24 volts a good idea, and is it enough?
I think a better idea is to introduce 48 V power in modern power supplies:
-> It will provide 100 watts of power using ~2 A cables: smaller footprint and more safety
-> 48 volts is a good option: this voltage is a de-facto "standard" used in many devices, and you can easily step down from 48 V to 24 V, much more easily than stepping up from 24 V to 48 V; it's efficient
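The same P = V x I arithmetic shows why raising the voltage lowers the cable current (a toy sketch; 100 W is the power level discussed above, and the 5 V / 20 V comparison points are my own added assumptions for context):

```python
def cable_current(power_watts: float, voltage_volts: float) -> float:
    """Current a cable must carry to deliver a given power: I = P / V."""
    return power_watts / voltage_volts

# Delivering 100 W over cables at different voltages:
for volts in (5, 20, 48):
    amps = cable_current(100, volts)
    print(f"{volts:>2} V -> {amps:.2f} A")
# 5 V needs 20 A, 20 V needs 5 A, 48 V needs only ~2.08 A
```

Thinner cables, cooler contacts: the whole 48 V argument is just this inverse relationship between voltage and current at fixed power.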
Dunno, I think these USB guys are still bringing in new ideas and are not ready to finalize the standard yet...