The problem with this is that many of these types of flaws have been well known and widely publicised for years now (and many high-profile sites have suffered widely publicised exploits because of them).
However, there are now many standard practices that seasoned developers and system designers use to mitigate most of those issues (hell, whatever issues I may have with Ruby on Rails, I believe that with the current release you'd have to explicitly allow unescaped HTML into your pages).
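That escape-by-default posture is worth spelling out. The sketch below is a minimal Python analogue of it (not Rails itself); `render_comment` is a hypothetical helper, and the point is simply that untrusted input is neutralised unless you deliberately opt out:

```python
from html import escape

def render_comment(user_input: str) -> str:
    """Render untrusted input into an HTML fragment, escaping by default."""
    # escape() neutralises <, >, & and quotes, so injected markup is inert.
    return "<p>" + escape(user_input, quote=True) + "</p>"

print(render_comment('<script>alert("xss")</script>'))
# → <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

Anything that bypasses the escaping step (Rails' `raw`/`html_safe`, for instance) then becomes a visible, auditable decision rather than the silent default.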
Anyone who has been developing web applications for any decent length of time should be treating security (XSS, SQL injection, request forgery, etc.) as a matter of principle, because it's much harder to retrofit security once you're finished. The fact that their source has so many holes in it does not bode well for the underlying protocol: they are not approaching the project with security in mind at all (and it may be that they are simply not yet experienced enough to do so).

This would be fine if it were just your average open source project, but it's not. They have been donated some $200,000 with which to develop it, and the benefit that could be gained from it is immeasurable. If the code they write is full of flaws, you can probably expect the protocol to have issues as well.
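To make the "matter of principle" concrete, here is a minimal sketch of one of those standard practices, in Python with the stdlib `sqlite3` module (the table and data are made up for illustration): let the database driver bind values instead of interpolating them into the SQL string, and the classic injection payload becomes inert.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.org')")

def find_user(name: str):
    # UNSAFE would be: "SELECT ... WHERE name = '%s'" % name,
    # where name = "' OR '1'='1" returns every row.
    # Safe: the driver binds the value as data, never as SQL.
    cur = conn.execute("SELECT name, email FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user("alice"))        # [('alice', 'alice@example.org')]
print(find_user("' OR '1'='1"))  # [] — the payload is just a literal string
```

None of this is exotic; it's the baseline that any funded project should start from rather than bolt on later.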
As has been suggested, the very first thing they should have done is come up with the protocol/data schema/API through which the sites would communicate. This would need to include a standard way of allowing extensions/non-base data, because if there isn't one, many of the companies running servers will extend it themselves (à la Microsoft) to get their own kind of vendor lock-in. (The best approach would probably be something similar to the RSS 2.0 modules via namespaces, though I haven't spent too much time thinking about it.)
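To sketch what the RSS-2.0-module style looks like in practice, here is a small Python example using the stdlib `xml.etree.ElementTree`. The `photos` extension and its namespace URI are hypothetical; the point is that base elements stay unprefixed while extension data lives under its own namespace, so servers that don't understand an extension can ignore it safely:

```python
import xml.etree.ElementTree as ET

# Hypothetical extension namespace, for illustration only.
EXT = "http://example.org/ns/photos"
ET.register_namespace("photos", EXT)

profile = ET.Element("profile")
ET.SubElement(profile, "name").text = "Alice"
# Extension data sits under its own namespace, like an RSS 2.0 module.
ET.SubElement(profile, f"{{{EXT}}}albumCount").text = "3"

xml = ET.tostring(profile, encoding="unicode")
print(xml)

# A consumer reads base fields normally and extension fields by full URI.
parsed = ET.fromstring(xml)
print(parsed.findtext("name"))                  # Alice
print(parsed.findtext(f"{{{EXT}}}albumCount"))  # 3
```

With a convention like this baked into the spec from day one, vendors can add their own data without forking the base schema.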