Example 1 (running out of memory): your OS runs on 32-bit hardware, so a process has 4GB of address space. Your program allocates 333MB, then 666MB, and repeats that 4 times. Then it frees all four 333MB blocks - that's roughly 1.3GB freed. Now it asks for, say, 400MB - not possible! Why? Because there is no single free block that big in your address space: the holes sit at 0-333MB, 999-1332MB, and so on. Virtual memory suddenly became "physical". Consequence: out of memory while there's plenty of memory free.
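A minimal sketch of the scenario above, using a toy first-fit allocator over a simulated 4GB (4096MB) address space. The `Arena` class and its sizes are illustrative assumptions, not any real allocator's API; the point is just that total free space and the largest free hole are different numbers:

```python
# Toy first-fit allocator over a simulated 4096 MB address space.
# Sizes are in MB. Shows Example 1: ~1.4 GB free in total, but no
# single hole large enough for a 400 MB request.

class Arena:
    def __init__(self, size):
        self.free = [(0, size)]      # list of (start, length) holes
        self.allocated = {}          # start -> length

    def alloc(self, length):
        for i, (start, hole) in enumerate(self.free):
            if hole >= length:       # first fit
                self.free[i] = (start + length, hole - length)
                if self.free[i][1] == 0:
                    del self.free[i]
                self.allocated[start] = length
                return start
        return None                  # no hole big enough -> "out of memory"

    def release(self, start):
        length = self.allocated.pop(start)
        self.free.append((start, length))
        # (a real allocator would coalesce adjacent holes; here each
        # freed 333 MB block is surrounded by live 666 MB blocks, so
        # there is nothing to coalesce anyway)

arena = Arena(4096)
pairs = [(arena.alloc(333), arena.alloc(666)) for _ in range(4)]
for small, _big in pairs:
    arena.release(small)             # free all four 333 MB blocks

total_free = sum(length for _, length in arena.free)
largest_hole = max(length for _, length in arena.free)
print(total_free, largest_hole)      # 1432 MB free, largest hole only 333 MB
print(arena.alloc(400))              # None: the 400 MB request fails
```

Running it prints `1432 333` and then `None`: over a gigabyte is free, yet the 400MB allocation fails because no contiguous hole can hold it.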
Example 2 (a big process working set although actual heap usage is small): you allocate small(ish) things on the heap. While doing that, your allocator asks the OS for memory, but only from time to time, because that's expensive and the granularity is coarser (the OS hands out memory in "pages"). Imagine that 100 smallish things fit in 1 trip to the OS. Now allocate 1000 things, costing 10 trips (you get 10 pages). Then deallocate things 1 to 99, things 101 to 199, and so on - keeping one thing per page. Although only 10 smallish things are still in use, the OS reserves memory for you as if you still had all 1000: a page can go back only once everything on it is free, so only when you free thing 0 can the first page be returned. Consequence: a lot of swapping for little memory actually used, because the OS knows only about its own pages, and it swaps whole pages.
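The same effect, sketched as a toy model (the page size of 100 objects and the helper name are assumptions for illustration): a page can be handed back to the OS only when every object on it is free, so one survivor per page pins all 10 pages.

```python
# Toy model of Example 2: the allocator receives memory from the OS in
# pages holding 100 small objects each, and may return a page only
# when every object on that page has been freed.

PAGE_OBJECTS = 100  # assumed: 100 smallish things per OS page

def pages_still_held(live_objects):
    """Pages the allocator cannot return to the OS because at least
    one object on them is still live."""
    return len({obj // PAGE_OBJECTS for obj in live_objects})

# Allocate 1000 things (objects 0..999 -> 10 pages), then free all of
# them except one per page: things 0, 100, 200, ..., 900 stay live.
live = [i for i in range(1000) if i % PAGE_OBJECTS == 0]
print(len(live), pages_still_held(live))  # 10 live objects, 10 pages held
```

It prints `10 10`: ten live objects are enough to keep all ten pages resident, which is exactly the working set the OS sees and swaps.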
Conclusion: depending on the memory allocation profile of the program, fragmentation can become a big problem - even more so as consumption gets closer to the available address space.
Adapt. Enjoy. Survive.