The problem is that this would impact the entire Internet, not just social media sites. Before Section 230, the rule was set by two rulings: Cubby v. CompuServe (1991) and Stratton Oakmont v. Prodigy (1995). At the time, CompuServe and Prodigy were two of the biggest online services around, and both were sued (separately) over unwanted or indecent content posted on their platforms. CompuServe advertised its offering as unfiltered, and the court ruled that CompuServe couldn't be held responsible for what someone else posted. Prodigy, however, billed its service as "family friendly" and tried to filter out "bad content." When a defamatory post slipped past its filters, Prodigy was ruled liable precisely because it had exercised editorial control.
Section 230 was written because these precedents meant that any site or service that accepted user-generated content would be liable for that content if it did any filtering at all. To use Slashdot as an example: if I posted a defamatory comment about a person on Slashdot, then under Section 230 Slashdot wouldn't be liable. Under the Prodigy/CompuServe precedent, however, Slashdot would be liable if it did any filtering at all - and "filtering" includes removing spam, scam posts, death threats, etc. So Slashdot (and every other site or service) would have to choose between no filtering at all - letting the service turn into a wasteland of spam, scams, and the worst the Internet has to offer - or filtering its content and risking a lawsuit over any piece of content that evaded the filters.
An Internet without Section 230 would be either an Internet overrun by spam and scams or an Internet without any user-generated content at all - basically a glorified "online TV" service.