The prog-files can be seen as a snapshot of the forecasting system at the time of each export, so when evaluating the forecasts saved in the prog-files, you depend on the data and model settings that were in place at each export. This means that if the incoming historical outcome data for a series suffered quality problems during a period, and these problems affected the forecasts made during that period, the corresponding saved forecasts cannot be corrected even if the data itself is corrected afterwards. The follow-up will then show large deviations for the saved forecasts that cannot be explained by the current, updated outcome history. In this case, if we instead recreate forecasts based on the updated history and evaluate those, and the recreated forecasts perform better, we can rule out problems with the model settings and conclude that the poor quality of the saved forecasts was due to in-data problems.
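The diagnostic idea above can be sketched as a simple error comparison. This is a minimal illustration, not Aiolos functionality: the forecast and outcome series are invented example data, and mean absolute error stands in for whichever follow-up metric is used in practice.

```python
def mae(forecasts, outcomes):
    """Mean absolute error between a forecast series and the outcomes."""
    return sum(abs(f - o) for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Updated (corrected) outcome history for the evaluation period.
outcomes = [100.0, 110.0, 105.0, 98.0]

# Forecasts saved in the prog-files, made while the in-data was faulty.
saved = [130.0, 142.0, 138.0, 129.0]

# Forecasts recreated afterwards from the corrected history,
# using the same model settings.
recreated = [103.0, 108.0, 107.0, 101.0]

saved_mae = mae(saved, outcomes)
recreated_mae = mae(recreated, outcomes)

# If the recreated forecasts perform clearly better, the model settings
# are sound, and the large deviation in the saved forecasts points to
# in-data problems during the period rather than to the model.
print(f"saved MAE:     {saved_mae:.1f}")      # 31.5
print(f"recreated MAE: {recreated_mae:.1f}")  # 2.5
```

The same comparison, run with the old versus new model settings instead of faulty versus corrected history, supports the model-settings evaluation described below.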
Problems with the model settings can also be found this way. By changing the model settings and recreating forecasts based on them, you can compare the results against the saved forecasts, or against forecasts recreated with the historical model settings, and quickly decide whether the new model settings are favourable.
If a user wishes to quickly evaluate a new weather forecast provider, instead of setting up forecasting with the new provider and waiting until the system has saved enough prog-files, the user can now import historical weather forecasts into Aiolos, recreate forecasts based on them, and compare these directly with the other providers. This both saves a considerable amount of time and removes the risk that problems with the export, in-data or model settings have made the results stored in the prog-files useless. Additionally, weights for the new weather forecasts can be calculated immediately from the imported historical weather forecasts.
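To illustrate how weights could be derived from imported historical weather forecasts, the sketch below uses inverse-MAE weighting. This scheme, the provider names and the data are assumptions made for the example; the actual weighting method used by Aiolos is not specified here.

```python
def mae(forecasts, outcomes):
    """Mean absolute error between a forecast series and the outcomes."""
    return sum(abs(f - o) for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Observed values for the historical period (example data).
outcomes = [5.0, 7.0, 6.0, 8.0]

# One existing provider and one newly imported set of historical
# weather forecasts (hypothetical names and values).
providers = {
    "provider_a": [5.5, 7.5, 6.5, 8.5],
    "provider_b": [5.2, 7.1, 6.2, 8.1],
}

# Weight each provider by the inverse of its historical error,
# then normalise so the weights sum to one.
inv_errors = {name: 1.0 / mae(fc, outcomes) for name, fc in providers.items()}
total = sum(inv_errors.values())
weights = {name: inv / total for name, inv in inv_errors.items()}

for name, weight in weights.items():
    print(f"{name}: {weight:.2f}")
```

Because the weights are computed directly from the imported history, the new provider can take part in a combined forecast immediately, without waiting for new prog-files to accumulate.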