It seems (still) potentially very useful, and the federation support seems like a bigger deal than it might initially sound: instead of needing one person or organization to provide a huge server and mirrors for a big collection of media and user accounts, smaller groups and individuals can "federate" more manageably-sized server instances that they each run. Also, native pump.io support (pump.io being, if I understand right, more or less a very extensible "microblogging" standard) ought to mean you won't need a special "mediagoblin" client to use it outside of the web interface; you'd be able to use whatever general pump.io client software you might already be using on other services (again, assuming I understood that right).
It's one backend that handles a whole lot of different kinds of "media", so you don't need to install a "photo gallery" and a "video server" and a "document server" and so on separately. It takes whatever supported variety of media you give it and converts it to a "web-friendly" open format as needed. As their wiki currently puts it: "In the future, there will be all sorts of media types you can enable, but in the meanwhile there are six additional media types: video, audio, raw image, ascii art, STL/3d models, PDF and Document." (Last I heard, it additionally supports a "blog post" sort of type, i.e. HTML text. If MediaGoblin takes off, I suspect someone would get around to adding more.)
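To illustrate that "one backend, many media types" idea, here's a toy sketch of the kind of format dispatch involved. To be clear, the table and function name here are mine for illustration only, not MediaGoblin's actual processing API:

```python
# Toy illustration of "one backend, many media types": pick a
# web-friendly open target format based on the uploaded file's type.
# Hypothetical names -- not MediaGoblin code.
WEB_FRIENDLY = {
    "video": "webm",
    "audio": "webm",  # MediaGoblin currently re-transcodes audio to WebM
    "image": "jpeg",
    "stl": "stl",     # already an open format; can be served as-is
    "pdf": "pdf",
}


def target_format(media_type):
    """Return the open format an upload of this type would be converted to."""
    try:
        return WEB_FRIENDLY[media_type]
    except KeyError:
        raise ValueError("unsupported media type: %r" % media_type)
```

The real thing is of course pluggable per media type rather than a single table, but the user-visible effect is the same: hand it anything supported, get something a browser can display.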
I'd probably be more familiar with it, except that of the two media types I could personally get a lot of use out of, photos/still images seem to be very well supported but I've already got a much-easier-to-install Piwigo instance running for those, and audio support is kind of a kludgy mess at the moment. MediaGoblin would otherwise likely be a great (nigh-ideal, even) system for building a sound-effects library and/or podcast hosting.
To support audio, you have to install scipy and one or two other modules as I recall (in addition to the rest of the Python stuff MediaGoblin needs), even though they have nothing to do with the actual audio. From what I could glean from poking around in the source (disclaimer: I am NOT very experienced with Python, or even "object-oriented" programming in general), every bit of uploaded audio is currently transcoded twice. First it's transcoded to Ogg Vorbis, which is only used to generate the still-image "thumbnail" graphic in the form of a spectrogram (that's what scipy et al. are for), rather than e.g. extracting "cover art" from the metadata or generating a simple image via gd or something. Then that's discarded and the audio is re-transcoded to "WebM audio" rather than just keeping the Ogg Vorbis it already made.
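For anyone curious what the spectrogram step amounts to conceptually, here's a rough sketch using only the standard library rather than the scipy route MediaGoblin actually takes: read PCM samples from a WAV file, slice them into fixed windows, and take the magnitude of a (naive) DFT of each window. The function names are mine, purely for illustration:

```python
# Sketch of spectrogram generation, stdlib-only (not MediaGoblin code).
import cmath
import math
import struct
import wave


def wav_samples(path):
    """Return (mono samples as floats in [-1, 1], sample rate) from a 16-bit WAV."""
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
        channels = w.getnchannels()
        ints = struct.unpack("<%dh" % (len(frames) // 2), frames)
        # keep only the first channel if the file is multi-channel
        return [s / 32768.0 for s in ints[::channels]], w.getframerate()


def spectrogram(samples, window=256):
    """One magnitude spectrum per window (naive O(n^2) DFT, so keep windows small)."""
    rows = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        rows.append([
            abs(sum(chunk[t] * cmath.exp(-2j * math.pi * k * t / window)
                    for t in range(window)))
            for k in range(window // 2)  # bins up to the Nyquist frequency
        ])
    return rows
```

Render each row as a column of pixels (brightness = magnitude) and you've got the thumbnail image; the point is that this only needs decoded PCM, which is why the Vorbis intermediate seems like an odd detour.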
I wish I had a better grasp of Python. I know GStreamer has (undocumented?) support for reading and writing media metadata tags, and if I knew what I was doing I'd try to come up with some patches for the audio thumbnail/tags support. But since I can't even figure out where in the source code one would go to change the output format (to Ogg Vorbis, say), I clearly have some learning to do first.