How long are they going to keep charging such ridiculous prices for 10 gig networking?
10 gig copper has now been out longer than it took 1 gig copper to go from "ooh, enterprise" expensive to standard in every $499 laptop you could find. Yet they've managed to prop up 10 gig switch and NIC prices the whole time.
Which 10 gig? There were several different ones, which have finally narrowed down to two, with one of them being fiber... And...
Fiber sucks for endpoint networking. People step on network cords, tug on them, and twist them around. Fiber can't take that kind of abuse.
And what are you going to push over it? Unless you have a fast SSD, you can't even fill a gig link. And to get a full 10 gig, you need seriously fast memory and bus speeds as well as an SSD array.
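The arithmetic behind that claim is easy to check. A minimal back-of-envelope sketch, using raw line rate only (protocol overhead would lower the real figure slightly); the SSD number is an assumed typical SATA figure, not a measurement:

```python
# Back-of-envelope: sustained disk throughput needed to saturate a link.

def bytes_per_sec(link_gbit: float) -> float:
    """Payload bytes/second needed to fill a link of the given gigabit rate."""
    return link_gbit * 1e9 / 8

SATA_SSD = 550e6  # ~550 MB/s sequential: an assumed typical single SATA SSD

for gbit in (1, 10):
    need = bytes_per_sec(gbit)
    print(f"{gbit:>2} GbE needs {need / 1e6:.0f} MB/s; "
          f"one SATA SSD keeps up: {SATA_SSD >= need}")
```

One SATA SSD covers the 125 MB/s a gig link needs, but 10 GbE wants 1250 MB/s sustained, which is why you end up needing an array (or NVMe) to actually fill it.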
With many modern OSes adding spying and telemetry features and then disabling all the tried-and-tested methods to bypass them, it may wind up that the router is the only way to retain our digital privacy. So yes, I think open source networking has great utility.
True, but how is this better than the wealth of FOSS router projects out there now? SmallWall, t1n1wall, pfSense, OPNsense, BSDRP, OpenWrt, DD-WRT, Untangle, or any *nix with routing turned on?
The Raspberry Pi is not open; it depends on closed-source blobs in its firmware and drivers. Stop spreading that lie.
An Intel Atom motherboard is much more open, and MUCH more powerful. And more expensive. Cheap goes a long way, and that is a problem for the posters of this slashvertisement.
Particularly with the FCC racing to lock down router firmware,
Which is a damn good reason to separate the router and the WiFi. The FCC can't do shit about http://www.smallwall.org/ or http://www.pfsense.org/ or any other router distro that works better than most commercial offerings on old, cheap, retired desktops.
With subversion at least, the entire project's history is stored in a single place, and sure, you could make backups of it, but you have to stop the server to make sure your backup is consistent
No, you don't. I have a project with two SVN servers. One is the development server, where commits happen. After each commit, the repository is rsynced to the public server, where anyone can check out but no one can commit. It happens live, while the server is running, and it works every time. And it's a backup on top of the snapshots of the disk image of the VM running BOTH of those servers.
It is only as fragile as your environment.
In some ways yes, in some ways no. Large history can be an issue, but to get to that point you pretty much need to be doing something special for a Fortune 500 company. The entire Linux kernel history clocks in at under 2 GiB; the only company I've ever heard make that claim was Facebook, which went with Mercurial instead.
This changes quickly if you store binaries in your repository. (And yes, there are sometimes good reasons to do that.) That can mean executable binaries, compressed files, images... And if they change a lot, the repository can get very large, very fast. And since devs love those thin laptops with small SSDs (let's face it, so do I!), that can get ugly quick!
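The growth is easy to estimate: already-compressed binaries barely delta-compress, so each committed revision can cost close to the file's full size in history. A rough sketch with made-up but plausible numbers:

```python
# Rough illustration of history bloat from repeatedly committing a binary.
# Near worst case: each revision of a compressed file is stored nearly whole,
# because compressed data doesn't delta-compress well.

asset_mb  = 50    # one compressed asset (installer, image pack, firmware...)
revisions = 100   # times it was re-committed over the project's life

history_mb = asset_mb * revisions
print(f"~{history_mb / 1024:.1f} GiB of history for one {asset_mb} MB binary")
```

That's roughly 4.9 GiB of clone-everything history from a single file, which is exactly the scenario where a full-history DVCS hurts on a small laptop SSD.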
This is a consequence of how easy it is to branch and merge with git. I know Subversion has branches, but they can be harder to deal with, and it's impractical to spin up a branch for every feature and patch.
In some cases, this can be a good thing. Easy branching can lead to code sprawl... If you want to control a project more tightly, branching may need to be controlled.
I wouldn't recommend anyone roll their own svn+apache system. It's not worth even ten minutes of your time when those tested, out-of-os-distro stacks are available free.
Because consistency among your distros is overrated anyway...
Mercurial: I personally haven't seen any other VCS that's easier on Windows.
Subversion is easier. It does have fewer features, but for ease of use, that can be a good thing!
"Stupidity, like virtue, is its own reward" -- William E. Davidsen