OK, I read all the other "this is stupid" comments and my jaw kept dropping. I honestly wondered whether this was an April Fools' joke or something similar and we were all missing something somewhere (and please let me know if I am... I REALLY need to know). I HAD to read the article and the underlying paper, because I just couldn't believe the sheer asinine stupidity of the test, let alone that it was being presented as research, or that the test itself was so flawed! So after all that, I had to post: a summary for others, and my voice added to the crowd.
Assumption: software developers avoid disk access because they believe doing the work in memory is faster. This is framed in the context of BI and big data.
Testing: write a program representing a common, testable task in two versions, one that works in memory and one that works on disk.
Memory version:
1) Create a string in memory.
2) Concatenate it multiple times into a second string.
3) Write the second string to disk.
4) Flush the writes.
Disk version:
1) Create a string in memory.
2) Write it multiple times to disk.
3) Flush the writes.
Implement the code in Python and Java (a rough sketch of both versions follows below).
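Not the paper's actual code, just a minimal Python sketch of the two versions as I read them; the chunk text, the repeat count, and the file names are all mine:

    import time

    N = 200_000
    CHUNK = "some payload text "

    def memory_then_disk(path):
        # "Memory" version: build the whole payload in RAM first, then write once.
        buf = ""
        for _ in range(N):
            buf += CHUNK  # repeated concatenation; in the worst case this
                          # re-copies the growing string on every iteration
        with open(path, "w") as f:
            f.write(buf)
            f.flush()

    def straight_to_disk(path):
        # "Disk" version: hand each chunk straight to the (already buffered) file object.
        with open(path, "w") as f:
            for _ in range(N):
                f.write(CHUNK)
            f.flush()

    for name, fn in (("memory", memory_then_disk), ("disk", straight_to_disk)):
        start = time.perf_counter()
        fn(f"test_{name}.txt")
        print(f"{name} version: {time.perf_counter() - start:.3f}s")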
Conclusion: the memory test is so much slower than the disk test! Additionally, the languages used have certain quirks that make it worse. Optimization helped a little, but only on Linux. Therefore, programmers should reassess and understand their OS and programming languages before assuming this belief, which is not true.
The assumption and the idea of testing it... very good. I would have loved to learn the less obvious scenarios where this assumption should be questioned, especially in the world of click-and-drag programming for workflows, ETLs, and report writing.
But from there... it's all BS and stupidity. Basically, the test checks whether replicating the hard drive driver's buffering in memory and then using the driver to write to disk is faster than just using the driver to write to disk. Are you bloody serious?! That's like testing whether 2+2 is greater than 2+0. And that's before we start looking at Java and Python, which do a ton of work in terms of memory management and build all kinds of machinery around their data types. And before the fact that they wrote the Python code WRONG (that's the slow way of doing string or list concatenation). So they picked languages where the naive approach re-copies the data in memory O(n) extra times.
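For anyone who hasn't run into this before, here's roughly what the slow pattern versus the idiomatic one looks like in Python (function names and parameters are mine, not the paper's):

    # Slow pattern: each "+" builds a brand-new string, so in the worst case the
    # data accumulated so far gets copied again on every iteration.
    def build_slow(chunk, n):
        out = ""
        for _ in range(n):
            out = out + chunk
        return out

    # Idiomatic pattern: collect the pieces and join once at the end, so each
    # byte is copied roughly once.
    def build_fast(chunk, n):
        return "".join(chunk for _ in range(n))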
This test would have pointed the same direction in C, C++, or assembly! But the authors wouldn't have been able to write code that shows anything, because the differences would be down in the microseconds.
So let's set the record straight. NO developer out there goes out of their way to build a file in memory if it's simply going to be flushed to disk. It's not worth the extra lines of code, nor the CPU cycles lost shuffling the data around, especially since most operating systems already do this buffering at multiple points along the data chain, down at the very low hardware and driver levels! If we have developers like this, we have a ton of bigger problems in software development than this little thing that money can solve.
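To make "multiple points along the data chain" concrete, here's a tiny Python illustration of the buffering layers an ordinary write already passes through (file name and contents are made up):

    import os

    with open("layers_demo.txt", "w") as f:
        f.write("hello\n")    # lands in the file object's userspace buffer
        f.flush()             # pushes it into the OS page cache
        os.fsync(f.fileno())  # asks the OS to actually persist it to the device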
To test this belief properly, give me a scenario where you reuse the data you wrote to disk or memory, transform it, and then write it back to disk. See which version is slower. If it's written properly, you will see that the underlying systems actually keep the data in cache or memory for you and speed things up! If you find proper scenarios where the memory version is slower, please let us know, because that would actually add to the IT body of knowledge.
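Something like this sketch is what I mean; the transform, the record count, and the file names are all made up, and the interesting bit is that the re-read on the disk path is usually served straight out of the OS page cache:

    import time

    def transform(line):
        # stand-in for whatever the real workload does to each record
        return line.upper()

    def via_memory(records, out_path):
        # Keep the data in a Python list, transform it, write the result once.
        transformed = [transform(r) for r in records]
        with open(out_path, "w") as f:
            f.writelines(transformed)

    def via_disk(records, tmp_path, out_path):
        # Write the raw data first, read it back (usually from the page cache),
        # transform it, then write the result.
        with open(tmp_path, "w") as f:
            f.writelines(records)
        with open(tmp_path) as f:
            transformed = [transform(line) for line in f]
        with open(out_path, "w") as f:
            f.writelines(transformed)

    records = [f"record {i}\n" for i in range(500_000)]

    t0 = time.perf_counter()
    via_memory(records, "mem_result.txt")
    print(f"memory path: {time.perf_counter() - t0:.3f}s")

    t0 = time.perf_counter()
    via_disk(records, "raw_data.txt", "disk_result.txt")
    print(f"disk path:   {time.perf_counter() - t0:.3f}s")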
God, since this was big-data related, I was hoping for at least something along the lines of "in-database processing and extract vs. extract and client-side processing." Give me the points along the curve where one is better or worse than the other. THAT would have been interesting.