The problem being encountered is one I've faced often in 30 years of weather forecasting: Ambiguity Management.
The weather business deals with reams of data from thousands of sources and all the complexity of trying to follow a single swirl within a flowing river to figure out where it will be tomorrow. Decades of research and modeling have produced dozens of primary rule-based tools for forecasters, applicable to most situations. Objectively, you should be able to follow the rules, weed out the conflicting or contradictory ones, and get a reliable answer. Realistically, you don't. Why? Two reasons:
1. The dataset is incomplete.
2. The tools are imperfect.
You simply can't have perfect knowledge of all the relevant details in the atmosphere to feed a completely objective tool (computerized model or whatever) to get your perfect prediction. Like Roseanne Roseannadanna's mother said, "It's always something!"
The trick, then, in being a good (aka reliable) weather forecaster is how you manage the ambiguity of incomplete data filtered through inherently biased tools. Some weather stations run hot or cold, have local effects enhancing or reducing pressure or winds, etc., etc. Good models account for this, but that's a static adjustment, not a dynamic one. Models run hot or cold, fast or slow, depending on their structure and assumptions, and they reveal their strengths and weaknesses over time compared to other models and reality at verification time.
The basic forecasting questions are: Where is it, where is it going, and what will happen when it gets there? Because the models are perfectly deterministic (identical starting states always reproduce identical output) but are always wrong (inherent model and data limitations), you make your money examining the consistency. The model(s) have been running slow and cold recently because of whatever event is going on? Ok -- warm it up a few degrees and expect things a few hours earlier than it forecasts tomorrow. Some models handle well in winter but get klutzy with large thunderstorm events. One model I worked with covered the world in clouds if you waited long enough. Solution? Don't trust it past X number of hours. And so on for the family of models through the decades and to today. Some models have high skill up to a certain point, then it drops off quickly. Others show less skill, but are decent for the long haul. You get the idea. You can make a forecast using only one tool, but you can make a better one using several and sorting out their differences by using ambiguity management.
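The moves described above (correct a model's recent warm/cold drift from verification, stop trusting it past its skill horizon, blend what's left) can be sketched as a toy routine. Everything here is an invented illustration, not any real forecasting system: the models, their weights, trust horizons, and temperatures are all made up for the sake of the example.

```python
# Toy multi-model temperature blend with recent-verification bias correction.
# All models, numbers, and horizons below are hypothetical illustrations.

def bias_from_verification(forecasts, observations):
    """Mean error of recent forecasts against what actually happened.
    A positive result means the model has been running warm."""
    errors = [f - o for f, o in zip(forecasts, observations)]
    return sum(errors) / len(errors)

def corrected_forecast(raw, recent_bias):
    """Dynamic correction: subtract the drift seen at verification time."""
    return raw - recent_bias

def blend(models, lead_hours):
    """Weighted average, ignoring any model past its trust horizon
    (the "don't trust it past X hours" rule)."""
    usable = [(m["corrected"], m["weight"])
              for m in models if lead_hours <= m["trust_horizon_h"]]
    if not usable:
        raise ValueError("no model trusted at this lead time")
    total = sum(w for _, w in usable)
    return sum(v * w for v, w in usable) / total

# Model A: skillful short-range, but has run 2 degrees cold lately.
# Model B: less sharp, yet decent for the long haul.
model_a = {"raw": 10.0, "trust_horizon_h": 48, "weight": 0.7}
model_b = {"raw": 13.0, "trust_horizon_h": 120, "weight": 0.3}

model_a["corrected"] = corrected_forecast(
    model_a["raw"],
    bias_from_verification([8.0, 9.0, 7.0], [10.0, 11.0, 9.0]))  # bias = -2
model_b["corrected"] = corrected_forecast(model_b["raw"], 0.0)

print(round(blend([model_a, model_b], lead_hours=24), 1))  # both models count
print(round(blend([model_a, model_b], lead_hours=96), 1))  # past A's horizon
```

At 24 hours both corrected forecasts are averaged by weight; at 96 hours model A has aged out and model B carries the forecast alone -- the same sorting-out of model differences the paragraph describes, in miniature.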
Needless to say, you need a solid understanding of the physics and dynamics of the atmosphere to make good decisions and do all this effectively. The modelers and users now data mining these huge collections of information likewise need a solid understanding of statistics and the event mechanics they're examining to make any good sense of it all. At the very minimum, a large poster announcing "Coincidence is not Causation" needs to be in every office; otherwise you start getting breathless announcements about how underarm deodorant "causes" cancer because people eating hamburgers had a lower incidence rate by comparison.
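The poster's point is easy to demonstrate with a toy calculation: two quantities that both ride the same underlying trend (say, population growth) will correlate strongly even though neither causes the other. The numbers below are invented purely for illustration.

```python
# Toy illustration of "Coincidence is not Causation": two invented series
# that both follow a shared upward trend correlate strongly, though
# neither one drives the other.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

years = range(10)
# Both series are dominated by the same growth trend plus small,
# unrelated wiggles -- no causal link between them.
deodorant_sales = [100 + 12 * t + (3 if t % 2 else -3) for t in years]
hamburger_sales = [500 + 40 * t + (7 if t % 3 else -7) for t in years]

r = pearson(deodorant_sales, hamburger_sales)
print(round(r, 3))  # close to 1.0 despite no causation either way
```

A correlation near 1.0 here tells you only that both series share a trend; concluding that one causes the other is exactly the breathless-announcement mistake the poster warns against.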
Your Mileage May Vary -- a lot. That's the point.