StratoQ, on 27 January 2012 - 14:19, said:
In recent days we have seen quite large splits in the ensembles, with many of the members going for cold solutions and just as many for a milder outcome. Using a 10 pin bowling analogy, this is like a "7-10 Split", the most difficult of all to convert to a spare. Equally, these model solutions don't inspire much confidence in the average viewer.
My question is: is there a programming "learn" facility built into the models, whereby previous runs are re-analysed to see which member came closest and the rest are fine-tuned accordingly? Or anything similar?
Modern numerical models have no 'heuristic' ability, i.e. they don't learn from past 'errors' or 'successes'. However, they undergo changes, upgrades, testing and evaluation which (usually) result in improvements in their performance, as measured by various forecast skill scores.
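To give a feel for what a "skill score" means in practice, here's a minimal sketch in Python. All the numbers are invented for illustration; the idea is simply to compare a forecast's error against that of a cheap reference forecast (here, climatology):

```python
import numpy as np

# Toy example: scoring a forecast against observations.
# All values here are invented for illustration only.
forecast = np.array([2.1, 1.5, -0.3, 0.8, 3.2])   # model 2 m temperatures (degC)
observed = np.array([1.8, 1.0, -1.1, 1.2, 2.9])   # verifying observations (degC)
climatology = np.full_like(observed, 1.5)          # reference forecast: climatological mean

def rmse(pred, obs):
    """Root-mean-square error between a forecast and observations."""
    return np.sqrt(np.mean((pred - obs) ** 2))

# Skill relative to the climatology reference:
# 1 = perfect, 0 = no better than climatology, negative = worse.
skill = 1 - rmse(forecast, observed) / rmse(climatology, observed)
print(f"RMSE: {rmse(forecast, observed):.2f} degC, skill vs climatology: {skill:.2f}")
```

Operational centres track scores like this over many forecasts, so a model upgrade can be accepted or rejected on measured performance rather than on how any single run "looked".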
Unfortunately, 10 pin bowling isn't a very good analogue for ensembles ;-). But ensemble forecasting is currently considered one good way of tackling the difficulties of modelling the chaotic atmosphere/climate system and the inherent uncertainty of forecasts. I guess that, as ensemble forecasting is still in its infancy, understanding has yet to filter down to 'non-specialists'.
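The point about chaos is easy to demonstrate with the classic Lorenz (1963) toy model rather than a full NWP model. The sketch below (my own illustration, not anything from an operational system) starts twenty "members" from nearly identical initial conditions and lets tiny differences grow until the members disagree, which is exactly why a large ensemble spread signals an uncertain forecast:

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """One forward-Euler step of the Lorenz (1963) system, a classic
    toy model of chaotic atmospheric convection."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

rng = np.random.default_rng(42)
n_members, n_steps = 20, 2000

# Each member starts from the same "analysis" plus a tiny random
# perturbation, mimicking initial-condition uncertainty.
members = np.array([8.0, 1.0, 1.0]) + 1e-4 * rng.standard_normal((n_members, 3))

for _ in range(n_steps):
    members = np.array([lorenz63_step(m) for m in members])

# After ~20 model time units the members have diverged; the spread
# of the x-variable is a measure of forecast uncertainty.
print("ensemble mean x:  ", members[:, 0].mean())
print("ensemble spread x:", members[:, 0].std())
```

When the members cluster tightly you can be fairly confident; when they split (the "7-10" situation in the quoted post), the honest answer is that the atmosphere itself hasn't decided yet, and the split is information, not a failure of the model.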
Modern climate/global/mesoscale models include representations of all the physical quantities/variables Tony mentions. E.g. the WRF model (for real-data implementations) routinely incorporates terrain (at a suitable resolution for the grid), land-surface representation (variable/seasonal, from satellite or quasi-homogeneous datasets) including the urban land surface, and SSTs (from seasonal or satellite and/or buoy datasets). Variables/quantities important for radiative processes, such as trace gases like ozone and CO2, can also be assimilated/represented.
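For anyone curious what those static fields actually look like: WRF's preprocessor (WPS) writes them to a netCDF file, conventionally named geo_em.d01.nc for domain 1, which normally carries terrain height and land-use variables (HGT_M and LU_INDEX). Below is a minimal Python sketch of inspecting such a file; the file path is hypothetical, and you'd need a real geogrid output for it to run:

```python
from netCDF4 import Dataset  # pip install netCDF4
import numpy as np

# Hypothetical path to a WPS geogrid output file for domain 1.
ds = Dataset("geo_em.d01.nc")

# HGT_M: terrain height (m) interpolated to the model grid.
# LU_INDEX: dominant land-use category for each grid cell.
terrain = np.asarray(ds.variables["HGT_M"][0, :, :])
landuse = np.asarray(ds.variables["LU_INDEX"][0, :, :])

print(f"terrain range: {terrain.min():.0f} to {terrain.max():.0f} m")
print(f"land-use categories present: {np.unique(landuse).size}")
ds.close()
```

The same file-based approach applies to the other inputs mentioned above: SSTs and trace-gas fields come in through analogous gridded datasets rather than being hard-coded into the model.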