I worked on 3 email systems - delivermail on one of the earliest UNIX systems, XNS Email, and X.400 (as a protocol inter-operability designer). Our Xerox Clearinghouse system got into distributed deadlock because it overran the email storage buffers: A is full of mail and has mail for B; B is full of mail and has mail for A. Even if they connect, they cannot transfer mail because there is no space on either side. They have to wait for mail to time out and get deleted as "undeliverable", and then they make progress for a few minutes before senders at both ends inject more email and push the system back into distributed deadlock. So there is a big problem if sender email transport rates approach the slowest bottleneck link in an email network. At Xerox in 1984 we were working with much smaller systems: a 9600 baud transatlantic link, a 56 kbaud norcal-socal link, and 1.5 MB of email buffer storage. The Clearinghouse name servers were only sending tiny updates, but they used an O(N^2) algorithm (N name servers each trying to send to N other name servers, rather than using an O(N) spanning tree) and hung themselves.
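To make the failure mode concrete, here is a minimal Python sketch of that buffer deadlock; the host names, buffer capacity, and one-message-at-a-time transfer rule are assumptions made up for illustration, not the actual Clearinghouse implementation:

    # Two mail hosts with fixed-size buffers, each completely full of mail
    # addressed to the other. Neither can accept, so neither can send.
    class MailHost:
        def __init__(self, name, capacity):
            self.name = name
            self.capacity = capacity          # total buffer slots on this host
            self.queue = []                   # messages buffered locally

        def free_slots(self):
            return self.capacity - len(self.queue)

        def try_transfer(self, other):
            # A message can move only if the receiver has a free buffer slot.
            if self.queue and other.free_slots() > 0:
                other.queue.append(self.queue.pop(0))
                return True
            return False

    a = MailHost("A", capacity=3)
    b = MailHost("B", capacity=3)
    a.queue = ["mail for B"] * 3
    b.queue = ["mail for A"] * 3

    # Neither side has space, so no transfer succeeds in either direction.
    print(a.try_transfer(b), b.try_transfer(a))   # False False
    # Progress only resumes when a message times out and is dropped as
    # undeliverable, freeing a slot -- and new senders promptly refill it.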
Another issue is reliability. The TCP checksum is only a 16-bit one's-complement sum. This works okay on super-reliable wired networks moving a few tens of megabytes, maybe occasionally a few hundred, but I don't have time to do the math and as you scale into hundreds of megabytes and gigabytes the checksum will fail: it will pass corrupted email. You will run into the classic "End-to-End Arguments in System Design", which says only the endpoints know how strong to make the checksum. You'll need a more powerful end-to-end checksum to achieve reliable delivery. If you try to strengthen SMTP email itself, then 90% of mail users will pay an unacceptable price. Don't laugh; checksum failure has caused OSPF routing outages before, breaking internet routing.
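A rough sketch of why the 16-bit one's-complement sum is weak as an end-to-end check (the message content below is a made-up example): swapping two aligned 16-bit words is one corruption pattern the sum cannot see, because addition is commutative, while a strong end-to-end hash such as SHA-256 catches it immediately.

    import hashlib

    def internet_checksum(data: bytes) -> int:
        # RFC 1071-style 16-bit one's-complement sum (the TCP/IP checksum).
        if len(data) % 2:
            data += b"\x00"                            # pad to 16-bit boundary
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
        return (~total) & 0xFFFF

    original = b"MAIL FROM:<alice@example.com> hello world"
    # Swap the second and third 16-bit words: the data is corrupted, but the
    # multiset of words being summed is unchanged.
    corrupted = original[:2] + original[4:6] + original[2:4] + original[6:]

    assert corrupted != original
    assert internet_checksum(corrupted) == internet_checksum(original)   # missed!
    assert hashlib.sha256(corrupted).digest() != hashlib.sha256(original).digest()

The stronger check has to live at the endpoints (the mail user agents or the message store), which is exactly the end-to-end argument: the transport in the middle cannot know how much integrity the application needs.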