I have lived on both sides of the fence: I wrote code for 35 years after failing second-quarter calculus, and with it the entire university program. After those 35 years, I took a break from work, went back to school, passed four semesters of calculus and many other courses, and came out with a BS in Physics at an amusingly advanced age.
In the first segment of my career, there were a couple of projects where I had to rely upon someone with a stronger math background and an expensive MATLAB DSP application package.
The first project was a low-jitter timing subsystem, where a device needed to synchronize to timing derived from a signal from a master device. I coded it, then used a combination of fragmentary digital-filter experience picked up on a couple of silly personal projects over the years and partially informed intuition to run the filter algorithm at two different rates: a high (but CPU-expensive) rate for acquisition (where the abstract design failed), and a lower rate, exactly per the abstract design, for tracking.
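The two-rate idea can be sketched with a toy filter. This is not the original algorithm; it is a minimal illustration, assuming a hypothetical one-pole smoother of a measured timing offset, where a wide bandwidth pulls in quickly during acquisition and a narrow bandwidth keeps jitter low during tracking. All names and coefficients are made up.

```python
# Hypothetical sketch: one-pole low-pass smoothing of a timing-offset
# estimate, configured two ways. Coefficients are illustrative only.

def make_filter(alpha):
    """Return a one-pole low-pass filter: y += alpha * (x - y)."""
    y = 0.0
    def step(x):
        nonlocal y
        y += alpha * (x - y)
        return y
    return step

# Acquisition: wide bandwidth to converge quickly (CPU-expensive if run fast).
acquire = make_filter(alpha=0.2)
# Tracking: narrow bandwidth for low jitter once locked.
track = make_filter(alpha=0.02)

offset = 1.0  # constant measured timing offset (arbitrary units)
for _ in range(50):
    y_acq = acquire(offset)
    y_trk = track(offset)

# After the same number of steps, the wide-band filter sits much closer
# to the true offset than the narrow-band one.
```

The same trade-off drives the two-rate scheme: run the loop fast and wide until lock, then drop to the designed rate for tracking.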
The background I later received from four semesters of calculus (including a skeletal introduction to differential equations) might have helped here, but it would not have been sufficient.
The second project was a digital payload processor / modulator. I initiated and functionally specified much of the project (which was mostly an implementation of certain published standards), but making the implementation feasible in a modern FPGA with integrated multiplier/accumulator blocks required contributions from three PhD researchers with evolving expertise in CIC filter design.
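For readers unfamiliar with the filter family named above: a CIC (cascaded integrator-comb) filter decimates or interpolates using only additions and subtractions, which is exactly why it maps so well onto FPGA hardware. Below is a hedged, minimal single-purpose sketch of a CIC decimator, not anything from the project; the function name and parameters are my own.

```python
# Minimal CIC decimator sketch: N integrator stages at the input rate,
# decimation by R, then N comb stages (differential delay M) at the low
# rate. Multiplier-free; DC gain is (R * M) ** N.

def cic_decimate(samples, R, N=1, M=1):
    """Decimate `samples` by R with an N-stage CIC filter."""
    # Integrator stages, run at the full input rate
    ints = [0] * N
    integrated = []
    for x in samples:
        acc = x
        for i in range(N):
            ints[i] += acc
            acc = ints[i]
        integrated.append(acc)
    # Keep every R-th sample
    decimated = integrated[R - 1::R]
    # Comb stages, run at the decimated rate: y[n] = x[n] - x[n - M]
    delays = [[0] * M for _ in range(N)]
    out = []
    for x in decimated:
        acc = x
        for i in range(N):
            oldest = delays[i].pop(0)
            delays[i].append(acc)
            acc -= oldest
        out.append(acc)
    return out

# A constant input of 1 decimated by 4 settles at the DC gain, here 4.
result = cic_decimate([1] * 16, R=4)
```

Real designs cascade several stages and must manage register growth and the droop in the passband, which is where the specialist expertise came in.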
My contribution consisted mostly of generic software, but there was one area where I had to quickly try to teach myself some mathematics, and it was mathematics that would not be taught in undergrad physics and most likely receives only cursory treatment, if any, in undergrad comp. sci. programs: finite field arithmetic (error control coding). The objective was to determine whether a legacy implementation, whose design rationale had not been adequately documented, was truly equivalent to a published standard (it was, though I discovered a subtle gotcha), and to validate a proposed implementation against that standard.
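To give a flavor of the mathematics involved: error-control codes such as Reed-Solomon work over the finite field GF(2^8), where addition is XOR and multiplication is carry-less polynomial multiplication reduced by a fixed field polynomial. The sketch below uses 0x11d (x^8 + x^4 + x^3 + x^2 + 1), one common choice; each standard fixes its own polynomial and element representation, and a subtle gotcha of the kind mentioned above often hides in exactly such a parameter.

```python
# Multiplication in GF(2^8): shift-and-XOR, reducing whenever the
# intermediate product reaches degree 8. Addition in this field is XOR.

def gf256_mul(a, b, poly=0x11d):
    """Multiply a and b in GF(2^8) modulo the given field polynomial."""
    result = 0
    while b:
        if b & 1:
            result ^= a        # conditional add (XOR) of the current shift
        b >>= 1
        a <<= 1
        if a & 0x100:          # degree hit 8: reduce modulo poly
            a ^= poly
    return result

# Every nonzero element has a multiplicative inverse; find one by search:
inv = next(x for x in range(1, 256) if gf256_mul(0x53, x) == 1)
```

Checking a legacy implementation against a standard then becomes a matter of verifying that both use the same polynomial, the same generator, and the same bit/symbol ordering, which is far easier once the arithmetic itself is nailed down.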
What arguably has been most valuable after emerging from even a late and rudimentary education in a scientific discipline, back into software work, is having been exposed and held to a scientific standard of rigor. It shows itself in devising a sufficient and feasible scheme of measurement, in collecting sufficient data, in formulating claims that are supported by data rather than mere belief, and in making clear when the line between data and belief must be drawn and crossed. At times it turns me into an organizationally inconvenient holy terror. It also allows me to deeply examine and locate defects that are potential product and reputation killers, to endure hypothesis-destroying experiments, and to emerge with a clear understanding of the nature of a defect and how to cure it.
I do wish that I'd had time and energy for more math: a practical course in statistics and design of experiments, preferably one designed for students who are well along in some field of study, rather than one serving as an early weed-out for weak students in oversubscribed majors. The introductory statistics course at my university, which mercifully is not required in the physics major, is of the latter type and is generally reviled, even by the capable.