I used to manage Digital UNIX (later renamed Tru64 UNIX) systems for a large, now bankrupt, telecom back around the turn of the millennium. The filesystem we used, AdvFS, was pretty cool and advanced for its time, but under the version of the OS we were running we found that free space would shrink faster than used space would grow. Filesystems would report full even though df showed only 60% used.
It turned out that when small files were deleted, not all of their space was returned as free. My customer wrote thousands upon thousands of 150-200 byte files a day and deleted just as many. The entire team and my customer agreed this was clearly a bug.
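To see why this churn rate fills a disk, here is a back-of-envelope sketch. Both numbers are assumptions I'm supplying for illustration, not figures from the incident: a hypothetical churn of 50,000 small files per day, and a hypothetical leak of one 1 KB on-disk fragment per deleted file.

```python
# Rough estimate of leaked space from small-file churn.
# Both constants are hypothetical, chosen only to illustrate the scale.
FILES_PER_DAY = 50_000          # assumed: files created and deleted daily
LEAKED_BYTES_PER_DELETE = 1024  # assumed: one 1 KB fragment not freed

leak_per_day = FILES_PER_DAY * LEAKED_BYTES_PER_DELETE
print(f"{leak_per_day / 2**20:.0f} MiB leaked per day")
print(f"{leak_per_day * 90 / 2**30:.1f} GiB leaked per quarter")
```

Even at these modest assumed rates the leak compounds into gigabytes within months, which matches the "fills up every few months" behavior described below.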
When we brought it up with Compaq (which had recently acquired Digital), the technical rep investigated and reported, "this is not a bug, the code is being executed exactly how it's written." Seriously, this was his response. I would have been more amused if he had seriously argued it was a "feature."
I never could get a definition of what a "bug" really was from him. I became rather infuriated when he reported to me that this issue was "fixed" in the latest major release of the OS. If there was no bug, why was it fixed?
I never got a straight answer and was left to find my own workaround: inserting a new volume into the filesystem to grow it, then removing an old volume, which forced the data to migrate. Once this had been done to every volume in the filesystem, the problem was resolved for a few more months. It was incredibly labor intensive and, as far as I'm concerned, incredibly risky to shuffle data around like that on a hot system with insane uptime requirements. There was also a massive performance hit while it ran, and my customer's application was already VERY IO intensive.
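The rotation described above can be sketched with AdvFS's volume commands. The domain name (data_dmn) and device names (dsk4c, dsk5c) are hypothetical placeholders, and the sketch defaults to a dry run that only prints what it would do rather than touching any real domain:

```shell
# Hypothetical sketch of the volume-rotation workaround for an AdvFS
# domain. Domain and device names are made up for illustration.
DRY_RUN=1

# Print the command in dry-run mode; execute it otherwise.
run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "$@"
  else
    "$@"
  fi
}

# 1. Grow the domain by adding a fresh volume.
run addvol /dev/disk/dsk5c data_dmn

# 2. Remove an old volume; rmvol migrates its data onto the
#    remaining volumes before detaching it, rewriting the
#    allocations that were leaking space.
run rmvol /dev/disk/dsk4c data_dmn
```

Repeating the addvol/rmvol pair for each volume in the domain rewrites everything, which is exactly why it hammered IO on a live system.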
I'm still just as angry about that conversation with the rep today as I was back then.