There must be some way to solve a problem like that, where you have a series of pointers to files, if not the files themselves as well, with the ability to add markers of some kind to each of those pointers. (Maybe we can call them "records!!!", like CDs used to be called.) And then! Then! We can hide how the management of these 'records' is organized from the user, so they don't have to think about it, and give them a simple, logical way to get data about those 'records' out of the big, organized whole. It'd be, like, a whole new basic way to store our records! We could easily find whatever we wanted in our basic data storage. I can't believe no one's thought of it before. ;)
My point here isn't that you should use a database to store data about your files (unfortunately, a unified markup system for files doesn't exist yet; it would be nice, but all that machinery lives in the OS right now). My point is that the author of the article is missing something: even if in-memory data systems do become extremely large, the underlying theory of the technology won't change much.
And that theory relies heavily on caching, on limiting how much of your overall dataset is relevant at any given moment, and so on. I'll admit it's possible that RAM will eventually outgrow the useful data size of many databases, but when that happens, market forces will probably make holding all your data in memory at once comparatively expensive. Partly because clean, concise code is generally far more expensive to produce than sloppy crap that chews through your data storage.
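To make the "caching and limiting the relevant working set" point concrete, here's a toy sketch of the idea in Python. The `LRUCache` class and its names are hypothetical, not anything from the article: it just illustrates the core database trick of keeping only the recently-touched subset of a larger dataset in fast memory and evicting the coldest entry when space runs out.

```python
from collections import OrderedDict

class LRUCache:
    """Toy least-recently-used cache: holds only the 'hot' subset of a
    larger dataset, evicting the coldest entry when capacity is hit."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self._store:
            return None  # a real database would now fetch from disk
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touch "a" so it becomes the hottest entry
cache.put("c", 3)      # over capacity: "b", the coldest entry, is evicted
print(cache.get("b"))  # None -> "b" fell out of the cache
print(cache.get("a"))  # 1   -> "a" survived because it was recently used
```

The point is that the cache's usefulness comes from the working set being smaller than the whole dataset; that assumption, not total data size, is what the underlying theory is built on.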