Updates are often expensive and disruptive to an organization, and the security expert pushing for them may not care because the disruption is "somebody else's problem". (I suppose this works both ways.)
Software often depends on multiple layers, and updating one layer can break another. A typical update cycle involves:
1. Keep an eye out for updates
2. Read up on any changes
3. Create a test stack or test station to try the update in your org's environment and/or alongside the other layers.
4. Fix, or devise workarounds for, any problems with the update that the testing uncovers
5. Schedule the update deployment
6. Prepare a contingency or rollback plan in case there are problems (a minimal sketch follows this list)
7. Coordinate and announce downtime during deployment
8. Test production after deployment
9. Educate users about the changes
10. Answer questions, and investigate new problems or user confusion over the new features/behavior.
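To make steps 6 and 8 less abstract, here is a minimal sketch of a post-deployment smoke test that rolls back on failure. Everything in it is a placeholder assumption: the health-check URLs, the `deploy.sh` command, and the `rollback()` helper stand in for whatever your org's tooling actually is.

```python
import subprocess
import sys
import urllib.request

# Hypothetical endpoints that should answer once the update is live (step 8).
HEALTH_CHECK_URLS = [
    "https://app.internal.example.com/healthz",
    "https://api.internal.example.com/healthz",
]

def production_healthy(timeout: float = 5.0) -> bool:
    """Return True only if every health endpoint answers HTTP 200."""
    for url in HEALTH_CHECK_URLS:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status != 200:
                    return False
        except OSError:
            # Covers connection failures and HTTP errors (HTTPError is an OSError).
            return False
    return True

def rollback() -> None:
    """Step 6's contingency: revert to the previously deployed version.

    The command is a placeholder; substitute your deployment tooling
    (package manager, orchestrator, VM snapshot restore, etc.).
    """
    subprocess.run(["./deploy.sh", "--version", "previous"], check=True)

if __name__ == "__main__":
    if production_healthy():
        print("Post-deployment checks passed.")
    else:
        print("Health checks failed; rolling back.", file=sys.stderr)
        rollback()
        sys.exit(1)
```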
That's not only labor-intensive, but if something goes wrong, managers often ask, "If it ain't broke, why did you fix it?"
You can reply that staying up to date reduces security risk, but managers or owners often view it as a concrete expenditure and disruption weighed against a fairly unlikely hypothetical, i.e. "being hacked". They will want solid evidence of breach probabilities to weigh against the here-and-now costs of update labor, headaches, and user disruption.
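One presentable form of that evidence is a back-of-envelope annualized-loss comparison: multiply an estimated yearly breach probability by an estimated breach cost, and set the result against the yearly update cost. Every figure in the sketch below is an invented placeholder, and the hard, contestable part is exactly the probability estimate the managers will challenge.

```python
# All figures are illustrative placeholders; substitute your org's estimates.
breach_probability_per_year = 0.05  # estimated chance of a breach while running unpatched
breach_cost = 500_000               # estimated cost of one breach: response, downtime, fines
annual_update_cost = 40_000         # testing, deployment, support across the year's updates

# Expected annual loss from staying unpatched (annualized loss expectancy).
expected_breach_loss = breach_probability_per_year * breach_cost  # 0.05 * 500,000 = 25,000

print(f"Expected annual breach loss if unpatched: ${expected_breach_loss:,.0f}")
print(f"Annual update cost:                       ${annual_update_cost:,.0f}")
# Whichever number is larger wins the argument; the dispute is over
# how defensible the 0.05 probability estimate really is.
```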
You can't just say, "updates are good for you, like broccoli". The suits often see that as make-work job-security games. Better, presentable evidence is needed.