The real story here is that if you don't know how to write code properly, string concatenation can be really slow.
Was their paper peer reviewed?
I just reviewed it, but frankly, they're not my peers.
They actually understand the problem and state it near the end of the paper. The issue is pretty simple, and when I read the /. summary I knew what the problem was. They're appending single bytes to a string. In both of the chosen languages, Java and Python, strings are immutable, so the "concatenation" is way the hell more complex than simply sticking a byte in a memory location. Every append creates a brand-new string object to hold both pieces together. So there's the overhead of object creation, memory copying, etc., and by the time you're done it's a lot of extra work for the CPU.
I'm going to state this as nicely as I can: what they proved is that a complete moron can write code so stupidly that working in RAM on a modern CPU becomes slower than going to disk. That's it.
Even if you wrote this in C in the style in which they did it, the program would still be slow. Since there's no way to "extend" a C string, each append would require determining the length of the current string (which means scanning it for the null byte), malloc'ing a new buffer one byte bigger, copying the old string over, and then adding the new character and the new null terminator. Scanning and copying each cost an operation per byte (yeah, it could be optimized to take advantage of the machine's word length) on every iteration, and that byte count grows by one each time.
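Just to make it concrete, here's a minimal sketch of that byte-at-a-time approach in C. The function name and the million-iteration loop are my own, purely for illustration, not anything from the paper:

#include <stdlib.h>
#include <string.h>

/* Hypothetical illustration (my names, not theirs): append one byte by
 * scanning for the null, allocating a new buffer, and copying everything. */
char *append_byte_naive(char *s, char c)
{
    size_t len = strlen(s);           /* O(len): walk the string for '\0' */
    char *bigger = malloc(len + 2);   /* room for one more char plus '\0' */
    if (bigger == NULL)
        return NULL;
    memcpy(bigger, s, len);           /* O(len): copy the whole thing again */
    bigger[len] = c;
    bigger[len + 1] = '\0';
    free(s);
    return bigger;
}

int main(void)
{
    char *s = calloc(1, 1);           /* start with an empty string */
    for (int i = 0; s != NULL && i < 1000000; i++)
        s = append_byte_naive(s, 'x');   /* quadratic: every append rescans
                                            and recopies the whole string */
    free(s);
    return 0;
}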
The sum of all integers up to N is N(N+1)/2. If N is 1,000,000 the sum is 500,000,500,000. So counting bytes (looking for the null) requires half a trillion operations and copying bytes requires another half trillion. Note that each "operation" is really several machine instructions for the purposes of this discussion.
Yeah, modern computers are fast, but when you start throwing around a trillion operations it's going to take some time.
Writing to disk will be faster for a number of reasons, mainly because the OS (and the standard library) buffers the writes, knows the length of the buffer, and handles it much, much better. It's not doing a disk operation every time they do a write. If they forced a flush to disk on every write, they'd still be sitting there waiting for it to finish.
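A rough sketch of what that looks like in C (the file name and loop count are my own, just for illustration): each fputc() drops a byte into an in-memory buffer, and the actual disk operations happen in big batches.

#include <stdio.h>

/* Sketch of why the disk side wins: stdio collects the bytes in a buffer
 * and the OS batches the real disk operations, so a million one-byte
 * writes don't mean a million trips to the disk. */
int main(void)
{
    FILE *f = fopen("out.dat", "wb");
    if (f == NULL)
        return 1;
    for (int i = 0; i < 1000000; i++) {
        fputc('x', f);       /* lands in the stdio buffer, not on the disk */
        /* fflush(f); */     /* flushing per byte only pushes it down to the
                                OS; an fsync() per byte would really make
                                them wait for the hardware */
    }
    fclose(f);               /* whatever is still buffered gets written out */
    return 0;
}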
A few notes here. First, in Java and Python the string object almost certainly holds a length field along with the actual character buffer, which avoids the scanning that the badly written C code I describe above would need; but the overhead of object creation, the JVM, the interpreter, etc. gets thrown into the mix instead. Second, if I were doing something like this in C I'd keep the string length as part of a struct and at least make it that much faster. The point is that a good programmer wouldn't write code this way in the first place.
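Something like this is what I mean, as a sketch under my own naming (strbuf and strbuf_append aren't from the paper or any particular library):

#include <stdlib.h>
#include <string.h>

/* A hypothetical length-tracking buffer: keep the length so there's no
 * scanning, and grow the buffer geometrically so the copying is amortized
 * instead of happening on every append. */
struct strbuf {
    char   *data;
    size_t  len;
    size_t  cap;
};

int strbuf_append(struct strbuf *sb, char c)
{
    if (sb->len + 1 >= sb->cap) {               /* need more room? */
        size_t newcap = sb->cap ? sb->cap * 2 : 64;
        char *p = realloc(sb->data, newcap);    /* grow geometrically */
        if (p == NULL)
            return -1;
        sb->data = p;
        sb->cap = newcap;
    }
    sb->data[sb->len++] = c;                    /* O(1) amortized append */
    sb->data[sb->len] = '\0';                   /* keep it a valid C string */
    return 0;
}

Start it off as struct strbuf sb = {0}; and a million appends cost a million byte stores plus a dozen or so reallocs, instead of a trillion byte-level operations.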
Anyway, this "paper" proves nothing except that really bad code will always suck. One would have to be an idiot to write anything close to what they've done here in a real-life scenario. I know because I've cleaned up other people's code that's on the level of this junk...