Mathematics predates the scientific method, so it can't be dependent on the scientific method for discovery.
Even those aspects of computer science that actually could apply the scientific method seem to mostly dismiss it.
Academic papers usually read more like essays than scientific studies. They spend time trying to captivate your attention with a problem, come up with a solution that works under some set of constraints, downplay the significance of those constraints, and then spend a lot of time showing you the solution and how well it works under their contrived scenarios.
They spend no time trying to construct experiments that will disprove their hypothesis (usually you can't even call it a hypothesis), and if they do find bad cases they call them "degenerate" cases and downplay those, too, or maybe add something to the list of constraints under which the solution works.
I would like to pose this challenge: pick a few academic papers; identify the hypothesis; identify the experiment that tries to disprove the hypothesis; and show a clear indication in the paper whether the experiment disproved the hypothesis, was consistent with the hypothesis, or was inconclusive (i.e. experiment not good enough).
Take the C-Store paper, for instance:
"We present preliminary performance data on a subset of TPC-H and show that the system we are building, C-Store, is substantially faster than popular commercial products."
The paper has been influential, and the argument is convincing. I even like the paper and find it insightful. But it looks more like they had an idea, tried it out, and published as soon as the numbers were good enough. I don't see much effort to control variables at all.