Policy tool or crystal ball?
Our understanding of the greenhouse effect is based on physics and on the interpretation of past climate. The severity of future warming will be determined by the policy decisions we make today, so when scientists are asked for predictions, the task falls to climate models, which approximate future climate under different scenarios of greenhouse gas emissions.
“They can’t even predict tomorrow’s weather!”
Most discussions of climate models with skeptics involve the declaration, “They can’t even predict tomorrow’s weather, so how are they going to predict it 100 years from now?”
Climate is not weather. The popular saying “Climate is what you expect, weather is what you get” is apt. Climate is the “average weather”: it is what you get if you smooth out the day-to-day and year-to-year variations. Weather is dominated by the chaos of the atmosphere and ocean, which is why you can only predict it a few days in advance before the accumulating uncertainties render the forecast worthless.
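The distinction can be sketched in a few lines of code. In this toy example (every number invented purely for illustration), 30 years of “weather” are built from a seasonal cycle plus random noise; any single day is unpredictable, but the long-term average for a given month is stable:

```python
import math
import random
import statistics

random.seed(0)

# Toy "weather": a seasonal cycle plus chaotic day-to-day noise.
# All numbers here are invented for illustration.
DAYS_PER_YEAR, YEARS = 365, 30
weather = [10 - 15 * math.cos(2 * math.pi * d / DAYS_PER_YEAR) + random.gauss(0, 5)
           for d in range(DAYS_PER_YEAR * YEARS)]

# "Climate" for July: the same calendar window averaged over all 30 years.
july = [weather[y * DAYS_PER_YEAR + d] for y in range(YEARS) for d in range(181, 212)]
print(f"One July day (weather): {weather[190]:.1f} C")                # hard to predict
print(f"30-year July mean (climate): {statistics.mean(july):.1f} C")  # stable
```

Run it again with a different seed and the individual days change completely, while the 30-year mean barely moves.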
For a weather forecast, details are important. It is important to know the approximate temperature at a given location at a given time, or whether it will rain in the morning, in the afternoon, or not at all. That is the unenviable task of weather models, which are constantly fed current weather conditions in order to provide a reliable forecast. Even with everything that can go wrong with weather models, there have been huge advancements. This graph shows the error rate for 72-, 48-, and 24-hour forecasts of surface pressure since 1966.
In this case, weather models can now predict 3 days in advance with better success than they could predict just one day in advance 25 years before.
Within the virtual world of the computer, climate models create their own weather, but they do not attempt to recreate actual weather. Whether it rains today, tomorrow, or the next day doesn’t really matter to the climate models, only that the frequency of precipitation matches what is expected for a specific region at a specific time of year. There is some work being done with climate models to predict short-term, large-scale changes. In that case, like weather models, the climate models are initialized with current conditions in the hope of better predicting the oscillations of the oceans, which have consequences for the short-term climate.
The role of climate models
The climate over the long term, however, is determined by the Earth’s energy balance. That is, how much energy is reaching the earth, how much bounces off, how much is retained, and how long it is retained. Everything else is just a matter of the earth temporarily shifting the energy from one part of the world to another.
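That energy budget can be written down directly. Here is a minimal sketch using the standard round figures for the solar constant and Earth’s albedo (not output from any climate model): the planet’s effective temperature is whatever makes outgoing thermal radiation balance absorbed sunlight.

```python
# Zero-dimensional energy balance: sunlight absorbed must equal heat radiated.
# S and ALBEDO are standard round figures, not output from any model.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3      # fraction of sunlight reflected straight back to space

absorbed = S * (1 - ALBEDO) / 4          # incoming energy spread over the sphere
T_eff = (absorbed / SIGMA) ** 0.25       # temperature that balances the budget
print(f"Absorbed solar: {absorbed:.0f} W/m^2")
print(f"Effective temperature: {T_eff:.0f} K ({T_eff - 273.15:.0f} C)")
```

The answer, about 255 K, is roughly 33 K colder than the observed surface average; that gap is what the greenhouse effect supplies.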
Climate models encapsulate our current understanding of the physics of the climate and how its various components interact with one another. They attempt to answer important questions, e.g.: given an enhanced greenhouse effect, what will happen to regional temperatures? Regional precipitation? Ocean circulation? Snow and ice cover? Vegetation? What will happen to the carbon cycle? What kind of feedbacks will there be? Some questions are more easily answered than others, but we don’t need to wait until every question is answered beyond a shadow of a doubt. No one can predict the future, but we have a good idea of what will happen if emissions continue as they are.
Types of climate models
There are a few kinds of climate models. The first are “simple” mathematical models. Simple is a relative term, because they remain very difficult for human beings working alone to compute. In the 1890s, Arrhenius spent months calculating his figures for climate sensitivity (see section 1). These models have names like “radiative-convective” models or “latitudinally varying energy balance models.” They contain little if any geographic information, and they simplify many aspects of the climate system.
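To get a feel for what even a “simple” model involves, here is a sketch of a zero-dimensional energy balance model stepped forward in time until incoming and outgoing energy balance — the kind of repetitive arithmetic Arrhenius ground through by hand. The greenhouse factor G and heat capacity C are hypothetical round numbers chosen for this sketch, not values from any published model.

```python
# A minimal time-stepping energy balance model: nudge the temperature each
# "day" by the energy imbalance until it settles. The greenhouse factor G and
# heat capacity C are hypothetical round numbers chosen for this sketch.
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant, W m^-2
ALBEDO = 0.3       # fraction of sunlight reflected back to space
G = 0.4            # fraction of outgoing longwave trapped (hypothetical)
C = 4.0e8          # effective heat capacity, J m^-2 K^-1 (ocean mixed-layer scale)
DT = 86400.0       # time step: one day, in seconds

T = 230.0          # arbitrary cold start, K
for _ in range(200_000):
    absorbed = S * (1 - ALBEDO) / 4
    emitted = (1 - G) * SIGMA * T ** 4
    T += DT * (absorbed - emitted) / C   # imbalance heats or cools the system

print(f"Equilibrium temperature: {T:.1f} K")
```

A computer does 200,000 of these steps in a blink; by hand, each one is several multiplications, which is why such calculations once took months.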
The models that you hear about today are general circulation models (GCMs). These are three dimensional representations of the Earth’s surface, atmosphere, oceans, biosphere and human elements. These require the fastest computers available; they cannot be calculated by hand.
Between the simple models and the GCMs are “earth system models of intermediate complexity” (EMICs). They are simple enough that they can run over thousands of years and can be used to model long term feedbacks.
There are also regional climate models (RCMs). These are higher resolution and improve the realism of specific regions. GCMs feed them information from the surrounding areas, and from that the RCMs attempt to create a more accurate picture of a specific region. One limitation is that the RCMs don’t send their results back to the GCMs, so this is a one-way street.
Increasing model complexity
Over the last few decades, models have increased in complexity.
They’ve added features like the carbon cycle and atmospheric chemistry, and they’ve greatly expanded on the inner workings of the ocean.
They’ve also added geographic resolution.
A typical climate model described in the IPCC’s First Assessment Report (FAR) broke the world into 500 km squares. By the Fourth Assessment Report (AR4), this had been reduced to about 110 km squares. These squares extend upward in layers all the way to the stratosphere and downward into the ocean.
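The cost of that refinement is easy to estimate. Dividing Earth’s surface area by the area of one grid square gives a rough horizontal cell count at each resolution (real model grids are laid out more carefully, so treat these as order-of-magnitude figures):

```python
import math

# Earth's surface area divided by the area of one grid square gives a rough
# horizontal cell count at each resolution quoted above.
EARTH_SURFACE_KM2 = 4 * math.pi * 6371 ** 2   # mean Earth radius ~6371 km

def horizontal_cells(spacing_km):
    return EARTH_SURFACE_KM2 / spacing_km ** 2

far = horizontal_cells(500)   # FAR-era resolution
ar4 = horizontal_cells(110)   # AR4-era resolution
print(f"FAR grid: ~{far:,.0f} cells; AR4 grid: ~{ar4:,.0f} cells")
print(f"Horizontal cost ratio: ~{ar4 / far:.0f}x, before counting extra "
      f"vertical layers and the smaller time steps a finer grid needs")
```

Roughly a twenty-fold increase in horizontal work alone, which is why resolution has tracked supercomputer power so closely.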
Increasing computer power
Needless to say, this additional detail requires huge amounts of computer power. The graph below shows the increase in the computational speed of the computers used for weather and, later, climate prediction.
This graph is on a logarithmic scale: each tick mark on the left is 10 times greater than the one below it. If it were on a linear scale, computer power would shoot up off the chart. There is about a 20-year delay between the computational power of supercomputers and what you might find on your desk.
There are three basic methods for validating that a model is capable of giving us reasonable output.
It seems obvious, but the first step is to run it over a long period of virtual years to make sure it isn’t spitting out nonsense. That is, the climate of the virtual world must resemble that of the real world. For example, temperature variations and precipitation patterns should match reality. This graphic compares modeled precipitation patterns to actual patterns.
The graphic on top is the observed mean precipitation, while the graphic on the bottom is the modeled precipitation for all of the climate models that went into AR4. Precipitation patterns largely match reality, although the magnitude of precipitation for given areas may be off somewhat.
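How “largely match” gets quantified is itself straightforward. One common approach is to compare the two fields cell by cell with a root-mean-square error and a pattern correlation; the two tiny “fields” below are made-up numbers purely to show the mechanics:

```python
import math

# Two tiny made-up "fields" standing in for gridded precipitation (mm/day).
observed = [2.1, 5.3, 8.0, 1.2, 0.4, 6.6, 9.1, 2.8, 1.0, 3.3, 7.2, 4.5]
modeled  = [1.8, 5.9, 7.1, 1.5, 0.9, 6.0, 8.4, 3.1, 1.4, 2.9, 6.8, 4.9]
n = len(observed)

# Root-mean-square error: the typical size of the cell-by-cell mismatch.
rmse = math.sqrt(sum((o - m) ** 2 for o, m in zip(observed, modeled)) / n)

# Pattern (Pearson) correlation: do the wet and dry areas line up?
mean_o, mean_m = sum(observed) / n, sum(modeled) / n
cov = sum((o - mean_o) * (m - mean_m) for o, m in zip(observed, modeled))
var_o = sum((o - mean_o) ** 2 for o in observed)
var_m = sum((m - mean_m) ** 2 for m in modeled)
corr = cov / math.sqrt(var_o * var_m)

print(f"RMSE: {rmse:.2f} mm/day, pattern correlation: {corr:.3f}")
```

A high pattern correlation with a modest RMSE is exactly the situation described above: the wet and dry regions are in the right places, but the magnitudes may be somewhat off.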
Another method is to model the distant past. The same model used to project future scenarios should be able to produce results consistent with what we know of the climate at times when conditions were much different than today. Variables corresponding to the shape of the orbit and tilt of the earth’s axis are changed, which alters the climate of the model. The results are compared to paleoclimate data. Such data are sparse, but in some cases precipitation and temperature records are available for certain regions, which should match the modeled output.
Finally, you can verify the models against specific events, such as explosive volcanic eruptions. If the earth cools too much, not enough, or not at all, you know that the model has problems. Likewise, the model should produce regional effects similar to observations. This graphic compares actual and modeled reaction of the climate system to the eruption of Mount Pinatubo.
A model should be able to closely approximate the climate of the recent past. We know past greenhouse gas concentrations. We know when volcanoes erupted and the magnitude of these eruptions. We know the emissions of anthropogenic aerosols, and we have an idea of the magnitude of solar activity. Plugging these values into the models should give us a close match to past climate.
This is the result of 58 simulations produced by 14 different climate models for AR4.
The yellow lines represent each individual run. Models are seeded with random values, so each time a model is run, a slightly different result is produced. Averaging the results gives us the red line, which passes straight through the black line, which is our measurement of the global average temperature anomaly. The models recreate the temperature increase of the early 20th century, which is a combination of natural and manmade influences. They recreate the flat period in the mid century, and they recreate the past several decades when anthropogenic influence emerged from natural variability.
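The averaging step is worth sketching. In this toy version (all parameters invented for illustration), each run is the same slow forced trend plus its own random internal variability; averaging 58 such runs cancels most of the noise and recovers the trend, just as the red line emerges from the yellow ones:

```python
import random
import statistics

# Each run: the same slow forced trend plus its own internal variability
# (persistent noise that drifts back toward zero). All parameters invented.
def one_run(seed, years=100):
    rng = random.Random(seed)   # a different seed gives a different run
    noise, anomaly = 0.0, []
    for year in range(years):
        forced = 0.008 * year                    # hypothetical forced trend
        noise = 0.7 * noise + rng.gauss(0, 0.1)  # internal "weather" noise
        anomaly.append(forced + noise)
    return anomaly

runs = [one_run(seed) for seed in range(58)]  # 58 runs, like the AR4 figure
ensemble_mean = [statistics.mean(r[y] for r in runs) for y in range(100)]
print(f"One run, final year: {runs[0][-1]:+.2f} C")
print(f"Ensemble mean, final year: {ensemble_mean[-1]:+.2f} C "
      f"(forced trend alone would give +0.79 C)")
```

Any single run wanders well away from the trend, but the 58-run mean sits close to it — which is why the ensemble mean, not any individual run, is compared against observations.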
Models vs. temperature reconstructions
Using the intermediate climate models that we discussed earlier, we can model the climate going back even further. The models are fed past concentrations of greenhouse gases, reconstructions of solar activity, and known volcanic eruptions.
The grey shaded area is the overlap of all of the temperature reconstructions going back one thousand years, with darker areas indicating more agreement between different reconstructions. The further back in time you go, the more the reconstructions and models disagree, but they agree very well up to about 700 years ago, and even beyond then, fall within the same range of variability.
Anthropogenic warming vs. natural variability
The models allow us to separate anthropogenic and natural forcing. 
In the upper left hand box, the grey shaded area shows the modeled results when only natural forcings, such as solar and volcanic activity, are included. The red line is observations. They don’t match.
The upper right hand graph is the same thing, except with natural forcings held constant and only anthropogenic forcings allowed to vary. It matches better, but still not well. The bottom graph shows the modeled results of both natural and anthropogenic forcings combined. This combination best approximates observations. The results are not perfect, but we neither expect nor require them to be.
Anthropogenic warming in context
We can do a similar comparison over the past thousand years, to place modern temperatures in context.
The thin lines are from the same models, except with anthropogenic influences removed.
Accuracy of models
In terms of their ability to predict the future, we obviously have fewer observations for comparison.
All of the model scenarios in this graph begin in 1990. The black dots are actual observations, and the black line is the smoothed temperature increase. The models calculate the anthropogenic global warming signal, bounded above and below by uncertainty. That is, they show the increase in temperatures attributable to the expected changes in greenhouse gases and anthropogenic aerosols, and any modeled feedbacks. Actual temperatures include natural variability that the model scenarios do not attempt to predict. As described in previous sections, these influences include the solar cycle (section 6), explosive volcanic eruptions, and the El Niño/Southern Oscillation and other ocean fluctuations (section 4). The “cold” dots in the early ‘90s are the result of the Mount Pinatubo eruption. The two cold dots in ‘96 and 2000 are due to La Niña, and the warm dot in 1998 is due to El Niño.
The models described in the FAR generally projected more warming than we have seen. This is represented by the blue area on this chart, with the blue line representing the aggregate result. Models from the Second Assessment Report (SAR) generally project less warming than we have observed, represented by the orange area. Models from the Third Assessment Report (TAR) generally project warming consistent with observations, represented by the green shaded area.
A false fingerprint
Skeptics do not trust computer models, and they have spent considerable time attacking model results over the decades (see next section). A new criticism relates to the calculations of warming in the tropical troposphere. According to models, this area of the troposphere should warm the fastest.
Some skeptics consider this to be a “fingerprint” of anthropogenic global warming, without which the theory is falsified (see section 7 for examples of fingerprints). However, this is not the case. Models predict significant warming of the tropical troposphere regardless of the cause of the warming.
These graphs require a bit of explanation. The x-axis indicates latitude, with the left representing the south pole, the middle the equator, and the right the north pole. The y-axis represents atmospheric pressure: larger numbers indicate more pressure, and thus lower altitudes, while lower numbers represent less pressure and thus higher altitudes. The upper map shows the modeled warming as the result of a 2% increase in solar radiation, whereas the bottom represents warming due to a doubling of pre-industrial levels of CO2. In both cases, the dark red blob in the middle represents warming of the upper tropical troposphere. The major difference between these two graphs is the temperature of the stratosphere. When the cause is solar, the stratosphere warms slightly. When the cause is greenhouse gases, the stratosphere cools, which is an actual fingerprint of an enhanced greenhouse effect. The atmosphere transitions from strong warming to strong cooling very quickly as you move higher.
So what do the observations say? That requires another confusing graph.
This graph represents the temperature trend of the tropics by altitude. The x-axis is the temperature trend: negative numbers indicate a cooling trend, and positive numbers indicate a warming trend. The y-axis represents altitude. The grey area represents the model-calculated rate of warming. This grey area is wide because, as we discussed earlier, each time a model is run, a slightly different result is obtained. This is not an error on the part of the models, but an expression of natural variability, especially that associated with El Niño. If the models are correct, measurements should fall within this range.
The problem is that measurements from radiosondes (weather balloons) show very little warming where the models show the most, between 300 and 200 millibars (hPa on this graph, or “hectopascals”). The higher the altitude, the greater the radiosonde measurements diverge from the models. Which should we believe?
Believe the models. In addition to poor coverage in the tropics, radiosondes have cooling biases that are well known yet difficult to properly correct for. Some interpretations of radiosonde temperature data correspond with the models better than others, such as the dark green line above. Identifying and correcting these flaws is ongoing work.
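For orientation, pressure levels like 300 and 200 hPa can be translated into rough altitudes with the standard scale-height approximation. The scale height of 7.5 km used here is a typical round value, so treat the results as estimates only:

```python
import math

# Scale-height approximation: z = H * ln(p0 / p). H = 7.5 km is a typical
# round value; real temperature profiles vary, so these are rough estimates.
H_KM = 7.5       # atmospheric scale height, km (approximate)
P0_HPA = 1013.0  # mean sea-level pressure, hPa

def altitude_km(p_hpa):
    return H_KM * math.log(P0_HPA / p_hpa)

for p in (300, 200):
    print(f"{p} hPa is roughly {altitude_km(p):.0f} km above sea level")
```

So the disputed layer sits roughly 9 to 12 km up, near the cruising altitude of airliners.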
An alternative is to derive temperature from radiosonde wind measurements, which have been recorded more consistently over the years. The result is shown by the pink line above, with the uncertainty represented by horizontal lines. These values correspond with the models. Below shows the derived and modeled warming by latitude and altitude as before.
The top graphs represent the period from 1970 to 2005, while the bottom represents 1979 to 2005. On the left is the temperature derived from the radiosonde wind data, while the right is the modeled temperature change. Areas where there are no measurements for comparison are left blank. The enhanced warming of the upper tropical troposphere is clearly evident.
Satellite measurements also show less warming at higher altitudes than the models, but they too are hardly authoritative. As discussed in section 4, the two major satellite analyses differ in their calculated rates of warming, not only for the lower troposphere but other altitudes as well. This implies a rather large uncertainty inherent in the satellite analyses due to the assumptions that go into each analysis and the limitations of the measurements. This uncertainty overlaps with the range of outcomes simulated by the models.
Remaining model uncertainties
There are, of course, many legitimate uncertainties when it comes to models, which make it difficult to precisely quantify the amount of warming.
Cloud feedbacks are perhaps the largest problem. Despite the increasing resolution of climate models, clouds are still far too small in relation to the size of the model grid boxes, so approximations must be used. The chart below shows the cloud feedback for one of the scenarios in AR4 as described by different models. Some calculate a negative feedback, but most calculate a positive feedback in response to greenhouse warming.
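Whatever its sign in a given model, the size of a feedback matters because of the standard gain relation dT = dT0 / (1 - f): a positive feedback factor amplifies the no-feedback response, a negative one damps it. In this sketch, the no-feedback response of 1.2 K for doubled CO2 is a commonly quoted figure, and the feedback factors are illustrative round numbers, not any particular model's output:

```python
# Standard feedback gain relation: dT = dT0 / (1 - f). A positive feedback
# factor f amplifies the no-feedback response; a negative one damps it.
# DT0 = 1.2 K for doubled CO2 is a commonly quoted figure; the f values
# below are illustrative round numbers, not any particular model's output.
def warming(no_feedback_response_k, f):
    return no_feedback_response_k / (1 - f)

DT0 = 1.2  # K, approximate no-feedback response to doubled CO2
for f in (-0.2, 0.0, 0.3, 0.6):
    print(f"feedback factor {f:+.1f}: equilibrium warming {warming(DT0, f):.1f} K")
```

Small disagreements in f translate into large disagreements in warming, which is why the cloud feedback spread dominates the uncertainty in climate sensitivity.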
The observations we have of cloud cover are of poor quality, so it is difficult to validate whether a model accurately recreates the trends of past cloud cover. As you might recall from the previous discussion on clouds (see section 6), the data we have for low level clouds are not accurate enough to even tell us if they are increasing or decreasing.
Another major uncertainty within the models is how they treat aerosols. Black carbon, or “soot,” falling on snow and ice causes significant warming in the arctic, and so-called “atmospheric brown clouds” cause warming of the lower atmosphere but cooling at the surface. Sulfate aerosols have a direct cooling effect, and they enhance clouds, making them more reflective which also causes cooling. Many aerosols undergo chemical changes once emitted into the atmosphere, which compounds the uncertainty. Because of these problems, the range of uncertainty for aerosols is quite large, although overall they are considered cooling influences.
Other uncertainties have to do with the speed and magnitude of feedbacks such as reduced albedo due to melting polar ice, methane release from permafrost, and CO2 saturation of the ocean and plant life, but all of these act to worsen warming.
Such uncertainties are represented by the range of possible outcomes for each IPCC scenario, based on the output of different models.
Temperature is on the left and precipitation is on the right. The dark black lines are the “ensemble mean,” or the average of all of the models.
Choose your path
Models output a lot of information regarding the climate, but it is the projections of global temperature that receive the most attention. This graph summarizes the results of four different emissions scenarios from AR4.
The bottom yellow line is what would happen if the composition of the atmosphere were frozen as it is today. The blue line represents a scenario that brings us to 2 °C warmer than current temperatures. This represents the very upper limit, beyond which is certainly dangerous. The actual dangerous level is probably even lower, at about 1 °C warmer than present. The other two scenarios would be a disaster for civilization (see section 11).
 (Houghton, 2004)
 (Le Treut, et al., 2007) Online here
 (Houghton, 2004)
 (Randall, et al., 2007) Online here. Figure 8.5
 (Hansen, et al., 1996) Brief here
 (Randall, et al., 2007) Online here. FAQ 8.1 Figure 1.
 (Jansen, et al., 2007) Online here. Figure 6.13 (cropped).
 (Jansen, et al., 2007) Online here. Figure 6.14 (cropped).
 (Le Treut, et al., 2007) Online here. Figure 1.1.
 (Realclimate Group, 2007) Online here
 (Thorne, 2008) Online here (free registration required)
 (Allen & Sherwood, 2008) Online here (free registration required)
 (Meehl, et al., 2007) Online here. Figure 10.11 (cropped).
 Ibid. Figure 10.5.
 Ibid. Figure 10.4.
Sources cited in Climate Models
Allen, R. J., & Sherwood, S. C. (2008). Warming maximum in the tropical upper troposphere deduced from thermal winds. Nature Geoscience , 399-403.
Hansen, J., Sato, M., Ruedy, R., Lacis, A., Asamoah, K., Borenstein, S., et al. (1996). A Pinatubo climate modeling investigation. In G. Fiocco, D. Fua, & G. Visconti (Eds.), The Mount Pinatubo Eruption: Effects on the Atmosphere and Climate (pp. 233-272). Springer-Verlag.
Houghton, J. (2004). Global Warming: The Complete Briefing, third edition. New York: Cambridge University Press.
Jansen, E., Overpeck, J., Briffa, K., Duplessy, J.-C., Joos, F., Masson-Delmotte, V., et al. (2007). Paleoclimate. In S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K. Averyt, et al. (Eds.), Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, United Kingdom and New York, NY, USA: Cambridge University Press.
Le Treut, H., Somerville, R., Cubasch, U., Ding, Y., Mauritzen, C., Mokssit, A., et al. (2007). Historical Overview of Climate Change Science. In S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K. Averyt, et al. (Eds.), Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (pp. 93-127). Cambridge, United Kingdom and New York, NY, USA: Cambridge University Press.
Meehl, G., Stocker, T., Collins, W., Friedlingstein, P., Gaye, A., Gregory, J., et al. (2007). Global Climate Projections. In S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K. Averyt, et al. (Eds.), Climate Change 2007: The Physical Science Basis. (pp. 747-845). Cambridge, United Kingdom and New York, NY, USA: Cambridge University Press.
Mitchell, J., Karoly, D., Hegerl, G., Zwiers, F., Allen, M., Marengo, J., et al. (2001). Detection of Climate Change and Attribution of Causes. In J. Houghton, Y. Ding, D. Griggs, M. Noguer, P. van der Linden, X. Dai, et al. (Eds.), Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change (pp. 695-738). Cambridge, United Kingdom and New York, USA: Cambridge University Press.
Randall, D., Wood, R., Bony, S., Colman, R., Fichefet, T., Fyfe, J., et al. (2007). Climate Models and Their Evaluation. In S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K. Averyt, et al. (Eds.), Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (pp. 589-662). Cambridge, United Kingdom and New York, NY, USA: Cambridge University Press.
Realclimate Group. (2007, December 12). Tropical Tropospheric Trends. Retrieved June 27, 2008, from Realclimate: http://www.realclimate.org/index.php/archives/2007/12/tropical-troposphere-trends/
Thorne, P. (2008). The answer is blowing in the wind. Nature Geoscience , 1, 347-348.