Private companies do science [slashdot.org] all the time [nytimes.com] because they need [bit-tech.net] to push their knowledge forward to stay competitive [cisco.com].
You're missing the rather large distinction between basic and applied research. Most companies do science with the explicit goal of bringing products to market, and are very reluctant to spend time and money on anything that doesn't have a clear route to commercialization. (I'm not saying this as a put-down: their job is to make money, not to publish journal articles.) But most basic research, at least in biomedicine, can take decades before commercialization is feasible, if it ever happens at all, and there's no way to know in advance whether it will.
My favorite example is X-ray crystallography, which pharmaceutical companies use to study the molecular interactions between proteins and drug candidates. The first experiments were performed in 1937, the first atomic-resolution structures were published in 1961, and I believe the first application to drug design came sometime in the 1980s. It's not like those lazy academics were just sitting on their hands all that time; it took them decades just to work out the math involved, and multiple Nobel prizes were awarded along the way. Academics and companies now solve thousands of crystal structures every year, but it took the rest of the 1980s and the 1990s for the technology to mature enough to support that pace.
There are actually a handful of companies so profitable (or so large) that they can subsidize undirected basic research: IBM is one, also Genentech, Novartis, and arguably Google and Microsoft. And smaller companies will publish bits and pieces of their directed research as well, if the lawyers let them. But most companies can't spend decades developing a theory; their shareholders would never stand for it.