> Given that the search engines (or at least Google), and the websites themselves, were much better in that era
Google has always been a crawler, not a directory. Its crawl was at times seeded with data from the DMOZ Open Directory Project, a directory that Netscape acquired in 1998 and ran as an open database to compete with Yahoo.
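To make the crawler-versus-directory distinction concrete, here is a minimal sketch in Python of what "seeding a crawl" means; the toy URLs and the fetch_links parameter are hypothetical stand-ins, not anything from Google. The directory supplies only the starting frontier, and the crawler discovers everything else by following links:

    from collections import deque

    def crawl(seed_urls, fetch_links, max_pages=1000):
        frontier = deque(seed_urls)        # directory entries are just the seed
        seen = set(seed_urls)
        while frontier and len(seen) < max_pages:
            url = frontier.popleft()
            for link in fetch_links(url):  # follow outbound links on each page
                if link not in seen:       # skip pages already discovered
                    seen.add(link)
                    frontier.append(link)
        return seen                        # reaches far beyond the seed list

    # Toy link graph standing in for the web; a real fetch_links would
    # download each page and extract its <a href> targets.
    toy_web = {
        "http://example.com/dmoz-listed": ["http://example.com/unlisted"],
        "http://example.com/unlisted": [],
    }
    print(crawl(["http://example.com/dmoz-listed"], lambda u: toy_web.get(u, [])))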
> I'm not sure what the downside would be.
One downside of the directory model is that the operator of a newly established website may not know which search engines its prospective audience uses.
A second downside is the time cost of navigating the red tape of keeping the site's listing updated everywhere. This included finding where a site belonged in each directory's detailed categorization and learning what each directory expected in each field of the submission form, so as to avoid a binding rejection that could delay a site's addition for months. It also included solving CAPTCHAs intended to deter spammers from flooding a search engine with low-quality sites. In fact, the first CAPTCHA I ever saw was on AltaVista's submission form circa early 2000, and it surprised me enough that I tried opening the site in Lynx to grab a screenshot and complain about its inaccessibility on Slashdot. It took until the fourth quarter of 2003 for the inaccessibility of CAPTCHAs to be recognized by a notable organization, when the W3C published its "Inaccessibility of CAPTCHA" working draft.
A third downside is the monetary cost of submitting a listing. At one point, some search engines started charging a fee just to crawl a site. In particular, GoTo.com (later renamed Overture) presented itself as a search engine but served results that were entirely pay-per-click ads. (Imagine if Google had a search engine just for AdWords listings, and that were the default.)