Just installed Calibre, so I'll see how it handles the number of files I have (380k+).
I think that might be quite taxing for calibre; in particular, the import process *will* take a good many hours. I just tested importing 3930 books (novel length) and it took about half an hour, so it seems you're looking at at least two days of unsupervised imports. Then again, doing *anything* with 380k files is bound to take time :)
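Back of the envelope, that test works out like this (a rough sketch only, assuming the import rate stays constant over the whole run, which it probably won't for a library this size):

```python
# Rough import-time estimate extrapolated from the 3930-book test.
# Assumes a constant import rate, which is optimistic at 380k files.
books_tested = 3930
minutes_tested = 30
rate = books_tested / minutes_tested  # roughly 131 books per minute

total_books = 380_000
total_hours = total_books / rate / 60
print(f"about {total_hours:.0f} hours, i.e. roughly {total_hours / 24:.1f} days")
```

So the "at least two days" figure is simply the tested rate scaled up, before any slowdown from a growing database.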
Look at partitioning your collection into several libraries if you have logical ways of dividing it. calibre also supports something called virtual libraries, which I haven't used myself, but it might speed up handling a very large library. As mentioned, there have been huge performance improvements in the last versions of calibre, but you will surely benefit from an SSD and a largish amount of RAM in any case.
I hope you have reasonably good metadata, either in the files themselves or in your naming structure, or you will probably face insurmountable problems tidying up your collection afterwards (this is not particular to calibre; GIGO applies here as everywhere else). Reading metadata from path info will probably be faster, as calibre won't have to parse each file on import. Check out "Control the adding of books" in the Add books dropdown menu, in particular the regular expressions for parsing paths and file names; if you go that route, disable reading metadata from file contents. Do a few test runs on small subsets to make sure that calibre catches at least author and title correctly. Do your imports in batches (you can use the tag-on-import feature to connect a book to a particular batch) and verify that the metadata is sensible as you go along. Some things, like correcting different variations of author names, can be done efficiently after import (if your library is at all usable).
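For illustration, the filename-parsing patterns calibre accepts are Python regular expressions with named groups like `author` and `title`. The pattern below is just a sketch for files named "Author - Title.ext", not calibre's default pattern; adapt it to your own naming scheme and test it on a few real filenames first:

```python
import re

# Example of the kind of named-group regex calibre's "guess metadata
# from filename" feature uses. This specific Author - Title pattern is
# an illustration, not calibre's shipped default.
pattern = re.compile(r"^(?P<author>[^-]+?) - (?P<title>.+?)\.\w+$")

m = pattern.match("Asimov, Isaac - Foundation.epub")
print(m.group("author"))  # Asimov, Isaac
print(m.group("title"))   # Foundation
```

Running a pattern like this over a handful of your filenames before the big import is a cheap way to catch naming-scheme surprises early.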
I (and likely others) would very much like to hear about your experiences, so feel free to make a thread on the calibre subforum of mobileread. Note that such a large library might seem suspicious to some users, as pirating is frowned upon. In any case, the devs are tuning the performance of the new, more efficient db code (one of the new features in the 1.0 release), and your library will be a good test subject :)