Forecasting is only one part of the whole stock assessment cycle. It gets the most attention in the press, but there are other steps along the way. Although I agree that forecasts of this nature can be very uncertain, they are part of the preseason planning process (i.e. commercial, First Nations and sport fisheries; conservation concerns, etc.). As Sockeye stocks begin to show up, test fisheries, catch monitoring and stock assessment crews on the spawning grounds provide data for inseason management to adjust forecasted abundances and fisheries if required. Following the inseason stock assessment, there are postseason evaluations where total run sizes are determined, including harvest rates and escapement to the spawning grounds.
Most models are over-parameterized and inaccurate? First, you need to understand that there are two groups of forecasting models – non-parametric and parametric – and you need to understand the differences between them before a discussion can be started on what is inaccurate. Non-parametric models forecast future returns from historical time series (e.g. cycle-year averages); there is no biological basis for this group of models. For miscellaneous stocks (e.g. the Early Shuswap stocks), recruitment data are often very limited or non-existent, so only non-parametric models can be used. Parametric models (or biological models), on the other hand, incorporate stock-recruitment data and thus require parameter estimation. These biological models include the Ricker, Larkin and Power models. Chilko, Upper Pitt and Adams are examples of stocks that would use biological models, which can also incorporate environmental information. In the 2010 forecast, 11 non-parametric models and 3 biological models were used – not hundreds. There can be variations on a particular model (e.g. the Ricker), but each variation is basically the same model family.
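To make the two groups concrete, here is a minimal sketch contrasting a parametric (biological) forecast with a non-parametric one. All the numbers are made up for illustration; real forecasts fit decades of stock-recruitment data, and the linearized least-squares fit below is only one simple way to estimate the Ricker parameters.

```python
import math
from statistics import mean

# Hypothetical spawner (S) -> recruit (R) pairs, in millions of fish.
pairs = [(0.4, 1.8), (0.9, 2.6), (1.5, 3.1), (2.2, 2.9), (0.6, 2.0)]

# Parametric: fit the linearized Ricker model ln(R/S) = a - b*S by least squares.
xs = [s for s, _ in pairs]
ys = [math.log(r / s) for s, r in pairs]
xbar, ybar = mean(xs), mean(ys)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
b = -slope
a = ybar + b * xbar

def ricker_forecast(spawners):
    """Biological forecast from brood-year spawners: R = S * exp(a - b*S)."""
    return spawners * math.exp(a - b * spawners)

# Non-parametric: average the returns observed on the same cycle line.
cycle_year_returns = [2.1, 3.4, 1.9, 2.8]  # hypothetical same-cycle returns
naive_forecast = mean(cycle_year_returns)

print(f"Ricker forecast for 1.2M spawners: {ricker_forecast(1.2):.2f}M")
print(f"Cycle-year-average forecast:       {naive_forecast:.2f}M")
```

The point is not the particular numbers but the structural difference: the biological model needs spawner-recruit data to estimate its parameters, while the cycle-year average needs only a return time series – which is why data-poor stocks are limited to the non-parametric group.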
The people doing these forecasts are not "getting out of control with differential equations in a hurry". The forecasts go through a rigorous and lengthy set of events (one of them being the Pacific Regional Science Advisory Process) before they are released to fisheries managers and the general public, and those events also involve individuals from outside DFO. The recommendations from this process can influence the final forecast report, so the actual forecasters are not the only people involved.
If you read the 2010 forecast you will see that they attempt to choose the models that perform best. Models are rated on their historical performance in predicting the abundance of particular stocks (called retrospective analysis). Some models perform better than others; however, a model that is good for one stock may not be good for another. Forecasters fully realize that the performance of a model depends on the assumptions underlying it. Without question, the variability in survival rates in recent years and the gaps in our knowledge of the first month of saltwater life of juvenile salmonids have made forecasting very challenging. Forecasts are commonly reported in the press and used for fisheries management at the 50% probability level, but in reality a forecast is a range of probabilities from 10% to 90% and should be reported as such. The reason is simple – you cannot put a single value on such uncertainty. Doing so misleads the public and does not fully convey the variability involved. What the hell does 25% or whatever percent probability mean? Well, for example, at the 25% probability level there would be a one-in-four chance that the actual number of returning Sockeye will be at or below the value given, under the assumption made about future survival.
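A small sketch of what those probability levels mean in practice. The median of 11.4 million fish and the log-scale spread of 0.6 are made-up numbers (not the actual 2010 forecast), and the lognormal shape is just one common assumption for run-size uncertainty:

```python
import math
from statistics import NormalDist

# Hypothetical lognormal forecast distribution: the point forecast is only
# the 50% level; the full forecast spans a range of probability levels.
MEDIAN = 11.4   # 50% probability level, millions of fish (hypothetical)
SIGMA = 0.6     # log-scale uncertainty (hypothetical)

_std_normal = NormalDist()  # standard normal, for quantile lookups

def probability_level(p):
    """Run size such that the actual return is at or below it with probability p."""
    return MEDIAN * math.exp(SIGMA * _std_normal.inv_cdf(p))

for p in (0.10, 0.25, 0.50, 0.75, 0.90):
    print(f"{p:.0%} probability level: {probability_level(p):5.1f}M Sockeye")
```

Reading the output the way the text describes: the 25% level is the run size with a one-in-four chance that the actual return comes in at or below it, and reporting only the 50% value throws away the rest of that range.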
It should be noted that the 2010 forecast attempted to address this uncertainty by assessing three different scenarios: Long-Term Average Productivity; Recent Productivity; and Productivity Equivalent to the 2005 Brood Year. Given the highly uncertain survival rates of recent years and what happened in 2009, “Recent Productivity” was chosen as the more conservative approach. However, in 2010, Fraser Sockeye did completely the opposite. Had the Long-Term Average approach (which had been used before, as in 2009) been adopted, the forecast would have been more in line with what actually came back in 2010, at the 90% probability level.
Whether a person agrees or disagrees with forecasting, I think it is important for the public to have some basic understanding and appreciation of what is involved.