This has already been answered here: http://marc.info/?l=openbsd-mi...
Do you also like the tree conflicts you get when moving directories around in your project? Those are my favorite thing in SVN. In theory, they've made this better in the new version but I'll believe it when I see it myself. That's something about SVN that just really pisses me off.
Tree conflicts are inherent to any version control system, not just Subversion.
People complain a lot about Subversion's tree conflict handling. I believe this is because the development work done so far has focused only on detecting tree conflicts, which leaves many users helpless: tree conflicts can be complex, and hard to understand and resolve.
The 1.8 release takes some steps towards the eventual goal of helping users resolve tree conflicts instead of merely detecting them. If you move a file or directory in Subversion 1.8, need to update the working copy before committing the rename, and the update brings in edits to the renamed file or directory, you'll now see a prompt which allows you to apply the incoming changes at the new location:
Tree conflict on 'foo.txt'
> local file moved away, incoming file edit upon update
Select: (mc) apply update to move destination, (p) postpone,
(q) quit resolution, (h) help:
This only works for 'svn update', however. It doesn't work for 'svn merge', yet. That's planned for a future release.
It's ok to create a branch (say from trunk), work on the branch, and even update it from trunk, but you're pretty much limited to a one-time reintegration merge from that branch back to trunk.
This limitation has been lifted in 1.8. As long as you use no merge commands more complex than
svn merge ^/branch/to/merge/from
the merge works in either direction and should never flag spurious conflicts caused by changes being applied more than once.
By the way, there were ways in 1.7 to work around having to delete a reintegrated branch. But the new 1.8 merge logic works better: forget about --reintegrate, don't use any -r or -c options, and you can always merge in either direction without any need to delete branches.
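As a sketch of this symmetric workflow (the branch name ^/branches/feature is made up for illustration):

```shell
# In a working copy of the feature branch: catch up with trunk.
svn merge ^/trunk
svn commit -m "Sync feature branch with trunk"

# Later, in a working copy of trunk: merge the branch back.
svn merge ^/branches/feature
svn commit -m "Merge feature branch into trunk"

# No --reintegrate, no -r/-c ranges, and the branch can keep
# living: sync it from trunk and merge it back again later.
```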
I'm a Subversion developer and would like to clarify this bit of the summary:
Major new features of 1.8 include switching to a new metadata storage engine by default instead of using Berkeley DB, first-class renames (instead of the CVS-era holdover of deleting and recreating with a new name) which will make merges involving renamed files saner, and a slightly simplified branch merging interface.
The "new metadata storage engine" probably refers to FSFS which has been the default repository backend since Subversion 1.2. FSFS has been improved since then, and 1.8 contains some new improvements (such as directory deltification) but does not add a new repository backend. The BDB-based backend is the one from Subversion 1.0 and is rarely used these days.
Subversion 1.8 doesn't contain support for "first-class renames". Renames are still modeled as copy+delete, with special annotations. The working copy is so far the only part of the system which is aware of moves; there are plans to make other subsystems aware of moves in future releases. Also, while tree conflicts involving local moves can now be auto-resolved after 'svn update', 'svn merge' mostly behaves as it did in 1.7, except that there is no need to use the --reintegrate option, and a tree conflict is now flagged if a directory was renamed or deleted on the merge source branch, whereas Subversion 1.7 would unconditionally perform the deletion in the merge target.
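You can see the copy+delete model directly in a working copy (a sketch; the file names are made up):

```shell
svn move foo.txt bar.txt
svn status
# The move shows up as a deletion of foo.txt plus an addition
# of bar.txt "with history": status marks bar.txt with a '+'
# flag, meaning it is a copy that carries foo.txt's ancestry.
```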
a window into the nationwide scope of the FBI's surveillance, monitoring, and reporting on peaceful protesters
Here's another such window:
http://events.ccc.de/congress/2012/Fahrplan/events/5338.en.html
http://mirror.fem-net.de/CCC/29C3/mp4-h264-LQ-iProd/29c3-5338-en-enemies_of_the_state_h264-iprod.mp4
I'm sad to hear that someone ripped off your work and resold it as their own. That's unjust, and it's one of the inherent risks of open source development.
There is a healthy variant of this, where companies build a product based on an open source code base: something that adds value, but does something the community around the open source project isn't interested in doing. Many companies do this, including Facebook and Yahoo, who fund development of e.g. Apache Hadoop, and Apple, who use BSD and Apache-licensed code in OS X. If you're doing this well, you feed back any changes the community might be interested in. And that doesn't mean just dropping some code on their mailing lists and walking away. You need to interact nicely, react to community feedback, and eventually become part of the community and share some responsibility.
Whoever sold your work as their own took the irresponsible and damaging route with the above approach, looking for short-term profit only, with no interest in supporting the original project. To fight this, you can use a copyleft licence and enforce it if it is violated, and/or build a community that is strong and dedicated to supporting the original product (this is why new projects at the Apache Software Foundation go through an incubating phase that builds up a community around the project -- the project graduates once the community is deemed healthy). As an additional lever, you could also trademark your product's name to ensure that others who use your work cannot use the same name for their own product but must rebrand it.
You can also sell services that relate to the software. E.g. where I work we sell support and consulting for open source development tools (svn, git, eclipse, and the like). We also contribute to some of the projects we sell services for, so money people pay for our services partly funds further development of these open source tools. We make sure clients are aware of that, and they are usually quite happy about getting support from someone who is a developer on the project. This gives us a small competitive edge over others who sell consulting for these open source products but don't interact with the open source community.
An excellent description of the role money can play in an open source project is given by Karl Fogel: http://www.producingoss.com/en/money.html
The ultimate issue is that renames (or moves) are implemented as delete+addition operations. Maybe back in the day, that appeared to be ok, but now its obvious it's a large failing.
That's not the problem. Mercurial also does this, and nobody (at least on slashdot) seems to complain about Mercurial's rename handling.
The problem with Subversion's implementation is that people are much more likely to run into some of its very annoying shortcomings in practice. But there are only a handful of cases which Subversion needs to handle better to catch up (though implementing proper handling for these cases is a lot of work; see my other comment here: http://news.slashdot.org/comments.pl?sid=1934004&cid=34755596).
Take a look at the tables on pages 29 and 30 of the thesis PDF. Note that these were based on Subversion 1.5 behaviour. Subversion 1.6 already detects more of these cases than git and hg combined, but it doesn't even try to automatically resolve any of them, not even the trivial ones. This is why hg's and git's merging works more nicely in practice right now. In fact, there is no tool without severe problems if you set the bar high enough. Would you perhaps want sane conflict resolution when merging directories across branches (table on page 30)? Sorry, no open source tool can do that, yet...
It is entirely possible that this will never happen in any reasonable time frame without re-engineering the whole system. If it can happen with relatively minor changes, it should have happened by now.
Speaking as a Subversion committer: yes, you're right, it will still take a long time. It's very hard to make this work with a few small changes because the system contains quite a lot of layers of abstraction. We need to peel away at each layer to make it work.
Each layer has a public API with some amount of compatibility guarantees, which is both a blessing and a burden. It's a blessing for people who want to write tooling around Subversion, because they know that tools written against, say, Subversion 1.0 will still work, without recompilation, with any subsequent 1.x release. This has allowed a lot of third-party tools of decent quality to be developed, with no need to parse command line output to interface with the version control tool (as was the case with CVS and, AFAIK, still is today with git). But it's a burden because it means we have to be careful not to break existing interfaces when making changes.
I wasn't around when the API compatibility guidelines were set up, and my life would be easier if they weren't there now. But the project is committed to keeping them. Trying to fix things anyway is quite a challenge. It's very, very hard, and has to happen in lots of small steps, spread out over several release cycles. But it's a lot of fun, too.
We're currently rewriting the lowest layer on the client side, the working copy library. This will eventually allow us to do things like tracking local renames, so that tree conflicts involving a local rename can be resolved more or less automatically. There will eventually also be improvements in other layers, e.g. the client/server interface, and eventually the repository itself. Then we can start propagating rename information from the server to the client, closing the loop and handling non-local renames properly as well. When? Dunno. When it's done. It will take longer than many would like, in any case.
If it is going to require major changes, somebody is essentially going to have to fork it and redo the core SCM storage from the ground up.
I don't see how forking would magically help with bringing about the desired changes any faster. You might just as well try to write a new and perfect centralized version control system from scratch. Or join the few people who are still actively committed to bringing Subversion forward and help us out. Subversion has already solved an awful lot of problems any centralized version control tool has to deal with. The glass is half-full when you look at it that way.
The problem is that the RT2500 chipset is proprietary, closed-source that's "maintained" by a Taiwanese manufacturer who doesn't care about his users at all and only wants to sell cheap hardware and as much of it as possible.
Well, actually, Ralink has for a long time been providing documentation to open source developers writing drivers for their devices, without requiring an NDA.
You could certainly put drivers in a higher ring than the kernel and allow them to only have limited access to memory, just as you do with a user-space application.
X.org has lots of userspace drivers that many of us use every day.
When I first heard about Linux, I had incorrectly assumed it was an evolution of Linux.
No no no, your assumption was correct!
it totally breaks all that added security you were supposed to get through virtualization
Virtualization does not add any security to the overall system. Adding more code to a system cannot make it more secure, by definition. You need less code running in the system to have fewer bugs in the system that malware authors can exploit. Adding virtualization adds yet another attack vector for malware: attacking the hypervisor. See http://en.wikipedia.org/wiki/Blue_Pill_(malware), for example. There are good reasons for using virtualization, but improving overall system security isn't one of them.
"I've seen it. It's rubbish." -- Marvin the Paranoid Android