The "packets of 576 bytes can't be fragmented" is a commonly stated reason, but it is wrong. It is a myth/misunderstanding. It is, in practice, true has has been true since probably the late 1980s, but DNS was around long before that. Indeed, if you read some of the earlier RFCs, it is quite clear that packets of any size could be fragmented, down to something like 16 bytes of payload per fragment.
No, the reason for the 512-byte payload size is much more basic than that. Back in the early 80s, memory was tight: you could have mainframes supporting dozens of users on a machine with maybe 1 MB of memory, and each of those users could have more than one active network connection. IP supports packet sizes up to around 64 KB, but it would be unreasonable to expect every host to be able to accept packets that large. Fragments from all of those packets could arrive piecemeal and out of order, so reassembling each packet would mean holding lots of 64 KB buffers, and each of those buffers would be over 6% of all available memory.
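The back-of-the-envelope arithmetic, using the same ballpark figures (1 MB of RAM, worst-case 64 KB datagrams; these are illustrative numbers, not measurements):

```python
# Rough cost of worst-case IP reassembly buffers on an early-80s host.
total_memory = 1 * 1024 * 1024        # ~1 MB of RAM
max_datagram = 64 * 1024              # IP's maximum datagram size (~64 KB)

fraction = max_datagram / total_memory
print(f"one reassembly buffer = {fraction:.1%} of memory")       # ~6.3%
print(f"buffers that fit at once: {total_memory // max_datagram}")  # 16
```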
It would be very unreasonable to expect every host on the internet to be able to accept packets of any size, even if those packets arrived in fragments small enough not to saturate your connection. Protocols like TCP can negotiate the packet size, but for UDP that gets messy and slow. So, instead, it is a *requirement* that each host on the internet can accept a packet with 512 bytes of payload. That packet can be fragmented, but it has to be accepted.
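That requirement is exactly why a classic DNS client can get away with a fixed 512-byte receive buffer for UDP replies. A minimal sketch of my own (the hand-rolled query, the name `example.com`, and the `8.8.8.8` resolver are just placeholders for illustration):

```python
import socket
import struct

def build_query(name: str) -> bytes:
    # 12-byte DNS header: ID, flags (RD set), QDCOUNT=1, then one A/IN question.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)   # QTYPE=A, QCLASS=IN

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2)
sock.sendto(build_query("example.com"), ("8.8.8.8", 53))
reply, _ = sock.recvfrom(512)   # 512 bytes: the payload size every host must accept
print(len(reply), "bytes received")
```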