In general, writing to RAM is faster than writing to disk. However, there are several things that get in the way of both.
1. OS Memory Management: Say you grow a small string into a big one. When the allocation runs into memory reserved by something else, what happens? Will the allocator fragment the string? Will it overwrite the neighboring data (a buffer overflow)? Will it find a larger contiguous block and copy the data there? Will it move the other allocations out of the way instead? Does all of that happen preemptively, or only when the collision occurs, and does it run on CPU cycles your app isn't using? Also, if the system is low on memory, it may page your data out to the disk anyway.
2. OS Disk Management: Many of the same concerns as memory management apply. One difference: it is easier for the filesystem to find free space for a bunch of small writes than for one large contiguous spot, so a file grown in small pieces may end up scattered across the disk, and there may be more seek time reading it back.
3. Disk Caching: You tell the program to append to the disk. The OS sends the data to the drive, the drive responds back "yeah, I got it," and the OS goes back to handling your app; in the meantime your drive is still spinning to actually put the data on the platter.
4. How your compiler (or runtime) handles the memory: Data = Data + "STRING" vs. Data += "STRING" vs. Data.Append("STRING") vs. appending a node to a linked list, e.g. { DataVal2 = malloc(7); strcpy(DataVal2, "STRING"); DataRec->Next = DataVal2; }. You could be spending O(n) time copying your memory on every append when you could be doing it in O(1).
Now, sometimes I do change my algorithm to write to the disk instead of handling it in memory, mostly because the data I am processing is huge, and I would much rather sacrifice speed to ensure that the data gets written.