Yes, these are the C++ answers I expected, and they demonstrate the problem: you can only have static arrays, which can live on the stack, or dynamic arrays, which must then live on the heap. My real-world requirement (this is not a synthetic example) is to have dynamic arrays on the stack, because the allocation overhead would be huge if you had to allocate temporary arrays each time you process a voxel of a 3D volume. Of course, you could work around this and pre-allocate the temporary memory outside of the loop, but that unnecessarily complicates the design, especially once you parallelize the loop. That C++ has no good way of doing this is simply a deficiency of the language.
I don't think that putting this on the stack is "fundamentally wrong". First of all, it is 2015: 424 bytes are not much at all. You can have gigabytes of stack and it does not matter; on any modern system with an MMU it is virtual memory, so pages you never touch cost you nothing. With multi-threading on 32-bit there is the risk of running out of address space (some compilers support split stacks for this reason). On 64-bit this issue simply does not exist any more. There is no reason whatsoever not to use the stack as much as you can.
But even if you have bounded stack space for some reason (e.g. an embedded system without an MMU), this is not a valid argument against VLAs at all. Dynamic arrays are not necessarily bigger than static arrays, or than the other things people put on the stack. In fact, the opposite is true: if you need to put things on the stack (see above) and do not have dynamic arrays, you have to put an array of the maximum possible size on the stack. That is clearly worse.
You have a minor point that heap allocation might give you a useful error, while a stack overrun might silently corrupt some other part of memory. I do not think this really matters in practice, for the following reasons:
- Usually, with memory overcommitment, you do not get an allocation error even on the heap, but a SEGFAULT when touching a newly allocated page that cannot be backed. To get an actual allocation error you basically have to limit your heap size.
- With a stack overrun you also get a SEGFAULT, which you can catch. You are right that a single large allocation can jump over the guard area, but remember, the guard area is virtual memory, and on 64-bit you can make it very big at no cost at all. Finally, if you really need to close this loophole, you can use a compiler that inserts run-time checks. So this is not a language-level problem at all.
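For example, GCC and Clang can insert such checks: with `-fstack-clash-protection` the compiler probes the stack one page at a time, so even a huge VLA or stack frame cannot skip over the guard page (the file name here is just a placeholder):

```shell
# Probe the stack page by page so large stack allocations cannot jump
# over the guard page (supported by GCC >= 8 and Clang >= 11).
gcc -O2 -fstack-clash-protection demo.c -o demo
```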