TL;DR: Not really.
I'm guessing that's more of an "asset management system". Ours was orientated around the video. As cameras rolled, we digitised the footage by tapping into the tape deck's monitor output; we had RFID tags on each tape, and we had LTC/VITC timecode from the deck. We therefore had a unique reference for every frame laid down, as it was laid down (i.e. there was zero ingest time, which was - and to a large extent still is - an issue with asset management systems).
The system then sent each frame to a centralised database server that had a webserver on it, and I wrote a streaming (ok, this part was in C :) server and a streaming player for Linux, Mac, and Windows that understood our custom streaming format. There wasn't anything complicated about the format: it was basically motion-JPEG data served from an HTTP interface, so the player would request the URL "http://asset-server/tape-rfid/timecode-from/timecode-to" and get an application/octet-stream back containing each frame as an individual JPEG file (with the headers common to every frame stripped).
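To give a flavour of how conceptually simple the serving side was, here's a rough sketch of that kind of endpoint. It's not the actual code - the real thing was a standalone C daemon, not CGI, and it stripped the JPEG headers common to every frame, which this doesn't bother with - and the on-disk paths and the 25fps assumption are made up:

    /* Toy sketch of "motion-JPEG over HTTP": the request path names a tape and
     * a timecode range, and the response is the JPEG for each frame, back to
     * back (a player can split on the JPEG SOI markers).  Hypothetical paths. */
    #include <stdio.h>
    #include <stdlib.h>

    #define FPS 25

    static long tc_to_frame(const char *tc)      /* "HH:MM:SS:FF" -> frame index */
    {
        int h, m, s, f;
        if (sscanf(tc, "%d:%d:%d:%d", &h, &m, &s, &f) != 4)
            return -1;
        return (((long)h * 60 + m) * 60 + s) * FPS + f;
    }

    int main(void)
    {
        char rfid[64], from[16], to[16], path[256];
        const char *info = getenv("PATH_INFO");  /* "/<tape-rfid>/<tc-from>/<tc-to>" */

        if (!info || sscanf(info, "/%63[^/]/%15[^/]/%15[^/]", rfid, from, to) != 3)
            return 1;

        printf("Content-Type: application/octet-stream\r\n\r\n");

        for (long n = tc_to_frame(from); n >= 0 && n <= tc_to_frame(to); n++) {
            snprintf(path, sizeof path, "/frames/%s/%08ld.jpg", rfid, n);
            FILE *fp = fopen(path, "rb");
            if (!fp)
                break;
            char buf[8192];
            size_t len;
            while ((len = fread(buf, 1, sizeof buf, fp)) > 0)
                fwrite(buf, 1, len, stdout);     /* raw frame bytes, back to back */
            fclose(fp);
        }
        return 0;
    }

The nice property is that the player needs nothing smarter than "fetch a URL, split into frames, draw them as fast as they arrive".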
What this let people do was record out in the desert and have their digital dailies rsync'd back to home base over a satellite uplink, and the team at home base could "see" the footage (we only supported quarter-res images at the time, the internet wasn't as fast as it is now), reliably locate frames on tapes, and discuss/annotate/create EDL (edit decision list, basically a set of timecode-to-timecode ranges) sequences and play around with them as if they had the tapes right there, even if it was at a low resolution.
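An EDL in this sense really is nothing fancy - something along these lines (made-up field names and tape IDs, not our actual schema) is all a sequence amounts to:

    /* Minimal sketch of an EDL "sequence": an ordered list of
     * (source tape, in-point, out-point) ranges.  Names are hypothetical. */
    struct edl_event {
        char tape_rfid[32];   /* which source tape the range lives on   */
        char tc_in[12];       /* "HH:MM:SS:FF" - first frame of the cut */
        char tc_out[12];      /* "HH:MM:SS:FF" - last frame of the cut  */
    };

    /* e.g. a two-cut sequence assembled in the browser: */
    static const struct edl_event sequence[] = {
        { "04A1F3", "01:02:10:00", "01:02:14:24" },
        { "04A1F3", "01:05:00:12", "01:05:03:05" },
    };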
On a more prosaic all-in-house level, the act of using a Discreet Inferno or Flame system (which controlled the tape decks in a post-production suite) would automatically log footage into our system, so the non-artist types could use our "virtual VTR" system to review footage and create play-lists, which could then be sent to the machine room with the certainty that what they'd composed in their web-browser would be what ended up on the tape that was later delivered to clients. This freed up a lot of tape-deck time, which could then be put to more profitable use by the post-house.
There was at least one time when I got an angry phone call from a client who claimed our system was screwing things up. They'd created the EDL for their client using our system and then sent the job to the tape room to be generated, and of course creating that new tape would automatically log the new footage into the system (because it was writing to a tape in a monitored tape deck). They looked at the output footage of the generated tape in their browser, and it wasn't right. After a bit of tracking things down, it turned out the tape room had inserted the wrong master tape, so we saved them the indignity/embarrassment of sending footage from a *competing* client out the door. That alone, in the eyes of the director, was worth the cost of the system.
We had similar procedures for rendered footage from 3D systems (Shake etc. at the time). Again, everything was collated into shots/scenes etc. on the database server. We had rules, applied to directories full of frames, that would parse out sequences from arbitrary filenames differentiated only by a frame number somewhere in the name. That's actually harder than it looks - there is *no* standard naming convention across post-houses :) I separated the code out into a library and wrote a small commandline utility called 'seqls', which was *very* popular for collapsing a directory of 10,000 files into a string like 'shot-id.capture.1-10000.tiff' ...
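The core idea fits in a screenful. This isn't the original code - it only handles the "last run of digits before the extension" convention, which is just one of the many the real thing had to cope with - but it shows the gist:

    /* Toy 'seqls': collapse a directory of numbered frames like
     * "shot-id.capture.0001.tiff" into "shot-id.capture.1-10000.tiff". */
    #include <ctype.h>
    #include <dirent.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct seq { char prefix[200]; char ext[32]; long lo, hi, count; };

    int main(int argc, char **argv)
    {
        struct seq seqs[256];
        int nseq = 0;
        DIR *d = opendir(argc > 1 ? argv[1] : ".");
        struct dirent *e;

        if (!d)
            return 1;

        while ((e = readdir(d)) != NULL) {
            char *dot = strrchr(e->d_name, '.');         /* extension starts here */
            if (!dot || dot == e->d_name)
                continue;
            char *p = dot;                               /* walk back over digits */
            while (p > e->d_name && isdigit((unsigned char)p[-1]))
                p--;
            if (p == dot || (size_t)(p - e->d_name) >= sizeof seqs[0].prefix)
                continue;                                /* no frame number, skip */
            long frame = strtol(p, NULL, 10);

            /* find or create the (prefix, extension) bucket */
            int i;
            for (i = 0; i < nseq; i++)
                if (strncmp(seqs[i].prefix, e->d_name, p - e->d_name) == 0 &&
                    seqs[i].prefix[p - e->d_name] == '\0' &&
                    strcmp(seqs[i].ext, dot + 1) == 0)
                    break;
            if (i == nseq) {
                if (nseq == 256)
                    continue;
                snprintf(seqs[i].prefix, sizeof seqs[i].prefix, "%.*s",
                         (int)(p - e->d_name), e->d_name);
                snprintf(seqs[i].ext, sizeof seqs[i].ext, "%s", dot + 1);
                seqs[i].lo = seqs[i].hi = frame;
                seqs[i].count = 1;
                nseq++;
            } else {
                if (frame < seqs[i].lo) seqs[i].lo = frame;
                if (frame > seqs[i].hi) seqs[i].hi = frame;
                seqs[i].count++;
            }
        }
        closedir(d);

        for (int i = 0; i < nseq; i++)
            printf("%s%ld-%ld.%s (%ld frames)\n",
                   seqs[i].prefix, seqs[i].lo, seqs[i].hi, seqs[i].ext, seqs[i].count);
        return 0;
    }

Run something like that over a render directory and you get one line per sequence instead of ten thousand filenames.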
All of this is (I'm sure - I haven't kept up to date) commonplace today, but it was pretty revolutionary at the time. I'd say about 90% of the code was PHP, there were various system daemons in C, there were video players for the major platforms in C/C++, and there was a kernel driver for the Linux box in C that handled the incoming video, digitised the audio, and digitised the LTC timecode (the audio timecode); the VITC timecode was on line 27 (?) of the video signal on every frame, and we decoded that in the kernel driver as well.
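To give an idea of the timecode side: once you've recovered the 80 bits of an LTC frame from the biphase-mark audio signal, pulling the time out is just picking BCD digits out of fixed bit positions (per SMPTE 12M). A sketch - not the driver code - looks like this:

    /* Extract HH:MM:SS:FF from one 80-bit LTC frame, assuming the bits have
     * already been recovered from the biphase-mark audio (bit 0 first). */
    static unsigned field(const unsigned char bits[80], int pos, int len)
    {
        unsigned v = 0;
        for (int i = 0; i < len; i++)            /* LSB-first within each field */
            v |= (unsigned)(bits[pos + i] & 1) << i;
        return v;
    }

    void ltc_to_timecode(const unsigned char bits[80],
                         int *hh, int *mm, int *ss, int *ff)
    {
        *ff = field(bits,  0, 4) + 10 * field(bits,  8, 2);   /* frames  */
        *ss = field(bits, 16, 4) + 10 * field(bits, 24, 3);   /* seconds */
        *mm = field(bits, 32, 4) + 10 * field(bits, 40, 3);   /* minutes */
        *hh = field(bits, 48, 4) + 10 * field(bits, 56, 2);   /* hours   */
        /* bits 64..79 are the sync word (0011 1111 1111 1101); the other
           nibbles carry user bits and flags, ignored here. */
    }

The fiddly part in a real reader is recovering those bits reliably when the deck is shuttling at odd speeds, not this arithmetic.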
On top of that, I designed an audio circuit to take in the stereo balanced audio and de-balance it to a standard audio signal we could encode along with the video frames. More recently, and purely for fun, I re-designed the entire thing to fit on a single board with an FPGA doing most of the work. One big-ass server and lots of cheap digitisers - it's amazing what you can do these days :)
That's a whistle-stop tour - I could go on and on, but the wife is calling me to do the dishes while she bathes the kid :)