RAM capacity for the retail AMD chips like the 3900X or 3950X (really any Zen 2 AM4 chip), and for both Zen+ and Zen 2 Threadripper chips, is limited by the fact that they only support unbuffered DIMMs.
People trying to compare Xeons against TRs based on wanting to stuff a terabyte or more of RAM wouldn't be buying a TR for that, they would be buying EPYCs, so it's a stupid comparison. It's also stupid because Intel charges thousands of dollars more for Xeons with large physical address spaces (even with recent price cuts, Intel gouges buyers just for wanting more addressable RAM even if they don't need the cores).
I have personally stuffed 128GB into a 3900X (AM4 socket) using 4 x 32GB ECC UDIMMs, and 256GB into a 2990WX (Threadripper socket) using 8x of the same type of memory. I can run the memory at up to around DDR4-2666.
Insofar as I know, one can use 64GB DIMMs in both situations (256GB on AM4 and 512GB on TR), and I think 128GB DIMMs can be used on the TR. But since they are unbuffered, they would have to run at low frequencies (a 1066 MHz clock for a 2133 MT/s data rate). But the biggest DIMMs I personally own are 32GB each so I can't test higher capacities.
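The clock-versus-data-rate relationship above is just the DDR (double data rate) convention: transfers happen on both clock edges, so the marketed speed is twice the I/O clock. A quick back-of-envelope sketch (the function names are mine, purely illustrative):

```python
# DDR memory transfers data on both edges of the I/O clock,
# so the effective data rate is twice the clock frequency.
def ddr_data_rate_mts(io_clock_mhz):
    return io_clock_mhz * 2

# Peak bandwidth per DIMM: data rate (MT/s) times 8 bytes
# per 64-bit transfer, converted to GB/s.
def peak_bandwidth_gbs(data_rate_mts):
    return data_rate_mts * 8 / 1000

print(ddr_data_rate_mts(1066))             # 2132 MT/s, marketed as "DDR4-2133"
print(round(peak_bandwidth_gbs(2133), 1))  # ~17.1 GB/s per DIMM
```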
AM4 and TR motherboard vendors do not generally validate for high-capacity memory, which is why they list lower capacities, but I believe they all support high-capacity memory just fine.
Very few people need that much memory even on a Threadripper. We need it for bulk compiles... around 2GB per CPU thread, so we need around 128GB of RAM with a 32-core/64-thread Threadripper and 256GB of RAM with a 64-core/128-thread Threadripper (the 3990X releases on February 7th). Most other (likely) workloads do not need that amount of memory though, particularly when one can get NVMe storage devices with 5GByte/sec bandwidths.
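That 2GB-per-thread sizing rule can be sketched as a one-liner (the helper name is mine, not anything standard):

```python
# Rough sizing rule for bulk parallel compiles: ~2 GB of RAM
# per CPU thread, per the numbers above.
GB_PER_THREAD = 2

def ram_needed_gb(threads):
    return threads * GB_PER_THREAD

# 3900X (24 threads), 2990WX (64 threads), 3990X (128 threads)
for threads in (24, 64, 128):
    print(threads, "threads ->", ram_needed_gb(threads), "GB")
```

For the 64-thread and 128-thread cases this works out to 128GB and 256GB, matching the figures above.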
The EPYC chips support 2TB per cpu socket (4TB total for dual-socket EPYCs), using registered DIMMs.
--
The bigger deal with the Threadrippers is the massive PCIe bandwidth. Not only do you get 64 CPU-attached PCIe 4.0 lanes (more when you include the chipset), but the Zen 2 I/O hub built into the CPU has over 400 GBytes/sec of peer-to-peer bandwidth. Intel chips clock in at more like 100 GBytes/sec (or less).
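For scale, the per-lane arithmetic goes like this: PCIe 4.0 signals at 16 GT/s with 128b/130b encoding, which works out to roughly 1.97 GB/s per lane per direction. A back-of-envelope sketch (function names are mine; note the 400+ GB/s figure above is the I/O die's aggregate switching capacity, not any single link):

```python
# PCIe 4.0: 16 GT/s per lane, 128b/130b encoding,
# so usable bytes/sec = 16 * (128/130) / 8 per lane, per direction.
def pcie4_lane_gbs():
    return 16 * (128 / 130) / 8

# Aggregate bandwidth for a link of N lanes; PCIe is full-duplex,
# so both directions can run at once.
def link_gbs(lanes, bidirectional=True):
    per_direction = pcie4_lane_gbs() * lanes
    return per_direction * (2 if bidirectional else 1)

print(round(link_gbs(16, bidirectional=False), 1))  # x16 slot, one way: ~31.5 GB/s
print(round(link_gbs(64), 1))                       # 64 CPU lanes, both ways: ~252.1 GB/s
```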
DRAM bandwidth is roughly the same for both vendors, but AMD cleans Intel's clock on peer-to-peer PCIe bandwidth, and this is quickly going to become important in the commercial space.
-Matt