Maybe it made sense once. But reading TFA, what convinced me that MUMPS is really BOLLOCKS was this quote:
For one thing, as a programmer, I can take an item stored in one of those globals and give it "children," which might be some additional properties of that item. So, we wind up with lists of different things that can be described and added to in different ways on the fly.
Hmm. That sounds almost like you're tracking relationships. Maybe you should use... (wait for it) A RELATIONAL DATABASE. Seriously, we store object databases in relational databases all the time. It's easy to add more properties to objects in a relational database precisely because of its nature: you just create a new relation, appropriately keyed. And there are lots of examples of systems backed by relational databases which permit you to add arbitrary new properties to objects. Take Drupal, for example; you can always either add a new module which adds new properties to old node types, or just add more data fields to old node types. You could add, for example, a parent-child relationship. In fact, modules exist to do this already.
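To make the point concrete, here's a minimal sketch (in Python with SQLite, schema and table names entirely my own invention) of exactly what the quote describes: items with "children" and properties added on the fly, done relationally with one self-referencing foreign key and one keyed property table:

```python
import sqlite3

# Hypothetical schema: an item table with a self-referencing parent link
# (the "children"), plus a keyed table for arbitrary extra properties.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE item (
        id        INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        parent_id INTEGER REFERENCES item(id)   -- parent-child relationship
    );
    CREATE TABLE item_property (
        item_id INTEGER NOT NULL REFERENCES item(id),
        key     TEXT NOT NULL,
        value   TEXT,
        PRIMARY KEY (item_id, key)              -- "appropriately keyed"
    );
""")

conn.execute("INSERT INTO item (id, name, parent_id) VALUES (1, 'patient', NULL)")
conn.execute("INSERT INTO item (id, name, parent_id) VALUES (2, 'allergy', 1)")
# Add a brand-new property "on the fly" -- no schema change required:
conn.execute("INSERT INTO item_property VALUES (2, 'severity', 'high')")

children = conn.execute("SELECT name FROM item WHERE parent_id = 1").fetchall()
print(children)  # -> [('allergy',)]
```

No new properties required a schema migration, and the parent-child query is a single indexed lookup.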
Maybe there is something about MUMPS which makes sense, but if there is, it wasn't articulated in this article. I tunneled down to the MUMPS/II page and found this:
1. Hierarchical database facility. Mumps data sets are not only organized along traditional sequential and direct access methods, but also as trees whose data nodes can be addressed as path descriptions in a manner which is easy for a novice programmer to master in a relatively short time;
2. Flexible and powerful string manipulation facilities. Mumps built-in string manipulation operators and functions provide programmers with access to efficient means to accomplish complex string manipulation and pattern matching operations.
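For what it's worth, "trees addressed by path descriptions" isn't exotic either. Here's a sketch in Python (the helper names are my own, and this is just nested dicts, not a real MUMPS global with its persistence or sorted subscripts) showing the same path-addressed sparse tree:

```python
# Minimal sketch of a MUMPS-style global: a sparse tree whose nodes are
# addressed by subscript paths. Helper names here are hypothetical.
def set_node(tree, path, value):
    """Walk/create nested dicts along `path`; store `value` at the leaf."""
    for key in path[:-1]:
        tree = tree.setdefault(key, {})
    tree[path[-1]] = value

def get_node(tree, path):
    """Follow the subscript path down and return the stored value."""
    for key in path:
        tree = tree[key]
    return tree

root = {}
# Roughly ^PATIENT(42,"allergy","penicillin")="severe" in MUMPS notation
set_node(root, (42, "allergy", "penicillin"), "severe")
print(get_node(root, (42, "allergy", "penicillin")))  # -> severe
```

That's the whole "hierarchical database facility" in a dozen lines of any modern scripting language, minus the persistence a relational backend would give you anyway.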
So basically, nothing you can't have in Perl today with a relational database and a table or two to track relationships between objects. But instead, it's a whole new opportunity to create problems! MUMPS is a great name for it.
The challenge isn't that you can't do the same thing with a newer type of database; it's converting all that data into the new one. That is expensive, time-consuming, and invariably winds up losing 20% or so of the data. My general rule is 2-2-20: it costs twice as much as planned, takes twice as long as planned, and you lose or screw up 20% of the data.