Global warming has trapped an explosive amount of energy in Earth's atmosphere in the past half century — the equivalent of about 25 billion atomic bombs, a new study finds.
In the paper, published April 17 in the journal Earth System Science Data, an international group of researchers estimated that, between 1971 and 2020, around 380 zettajoules — that is, 380,000,000,000,000,000,000,000 joules — of energy was trapped by global warming.
Such a big number is hard to put into context. But two researchers, who were not involved in the study, have put it into perspective by comparing the energy to that released by nukes. However, even then, the amount is still hard to wrap your head around.
In an article for The Conversation, Andrew King, a climate scientist at the University of Melbourne in Australia, and Steven Sherwood, a climate scientist at the University of New South Wales in Sydney, calculated that 380 zettajoules is equivalent to around 25 billion times the energy released during the detonation of "Little Boy," the atomic bomb dropped on Hiroshima, Japan, on Aug. 6, 1945.
Even more mind-blowing, the energy absorbed by the planet during this period likely represents only around 60% of the energy associated with total greenhouse gas emissions, so the actual figure is even higher, King and Sherwood wrote.
But such a large amount of energy is also puzzling, because based on that amount of heat being trapped in the atmosphere, the average global temperature should have risen by dozens of degrees since preindustrial times, rather than by the 2.2 degrees Fahrenheit (1.2 degrees Celsius) that we have observed, the pair wrote. So where has all this extra energy gone?
According to the study, the oceans have absorbed around 89% of the energy (338.2 zettajoules), land has absorbed 6% (22.8 zettajoules), 4% (15.2 zettajoules) has melted parts of the cryosphere — the part of Earth's climate system that includes snow, sea ice, freshwater ice, icebergs, glaciers and ice caps, ice sheets, ice shelves and permafrost — and just 1% (3.8 zettajoules) has remained in the atmosphere.
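As a sanity check, the percentage split reproduces the zettajoule figures quoted above. This is a quick sketch; the 380-zettajoule total and the shares are taken directly from the article:

```python
# Partition of the ~380 zettajoules of trapped energy (figures from the study).
total_zj = 380  # zettajoules (1 ZJ = 1e21 joules)

shares = {
    "oceans": 0.89,
    "land": 0.06,
    "cryosphere": 0.04,
    "atmosphere": 0.01,
}

for reservoir, fraction in shares.items():
    print(f"{reservoir}: {total_zj * fraction:.1f} ZJ")
# oceans: 338.2 ZJ, land: 22.8 ZJ, cryosphere: 15.2 ZJ, atmosphere: 3.8 ZJ
```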
The majority of the heat absorbed by the seas is trapped in the upper 0.6 mile (1 kilometer) of the oceans. This has spared humanity from the brunt of climate change so far, but it has also caused massive increases in sea surface temperatures, which has accelerated polar melting, damaged marine ecosystems, increased the severity of tropical storms and begun to disrupt ocean currents.
However, the oceans will not protect our planet forever, King and Sherwood wrote, so we must begin rapidly decreasing greenhouse gas emissions by decarbonizing the global economy to ensure our future survival. "We're in a race, and the stakes are as high as they could possibly be — ensuring a liveable climate for our children and for nature," they wrote.
Harry is a U.K.-based staff writer at Live Science. He studied Marine Biology at the University of Exeter (Penryn campus) and after graduating started his own blog site "Marine Madness," which he continues to run with other ocean enthusiasts. He is also interested in evolution, climate change, robots, space exploration, environmental conservation and anything that's been fossilized. When not at work he can be found watching sci-fi films, playing old Pokemon games or running (probably slower than he'd like).
It is astonishing that it hasn’t spontaneously exploded due to its own entropic forces.
Ahhh, the lib pinhead doom and gloom is alive and well. How about just go back to your mommy's basement and hide? My goodness, people.
So here we are, contributing a massive 4% of the yearly cycle of carbon dioxide, hereby written as CO2.
Let us, for the fun of it, say that humans manage to lower their reintroduction of CO2 to 0%.
That means it would take 25 years for the atmosphere to be missing one year's worth of human CO2.
The planet still produces 94% of the CO2, so how will limiting 4% stop anything?
In reply to Warthunder727: I think you meant to say that here you are, spewing more stupidity and nonsense because you don't understand the science, can't perform math, and have nothing worthwhile to say, so it must be untrue. Get an education before life kills you.
In reply to Debed: Flawed reasoning. The planet isn't "producing" 94% of the CO2.
In reply to John Henry: Well, it is closer to 96%.
Let's see if I have this correct. At the beginning of our planet's existence, the temperature was around 3,600 degrees F. It then cooled down to somewhere around 15 to 20 degrees F by the early 1800s. We then invented the internal combustion engine and the temperature reversed and is headed upwards to, Heaven knows where?
In reply to jacqwayne: I don't see how that is connected to the article, sorry. I am curious where you got the 15 to 20 degrees F in the early 1800s, though. Do you mean locally or globally?
"Global warming has trapped an explosive amount of energy in Earth's atmosphere in the past half century — the equivalent of about 25 billion atomic bombs, a new study finds." See above. However the megatons involved in each bomb, in order to complete a proper statistical analysis must be determined and listed.Reply
The energy content of bombs and explosions is measured in equivalent tons of TNT. A one-kiloton explosion is equivalent to detonating one thousand tons of TNT, and a one-megaton explosion to one million tons of TNT, all in a perfect world. The explosion of one ton of TNT releases approximately 4.2 × 10⁹ joules of energy; for comparison, it takes almost 6.0 × 10⁴ joules to warm up a cup of coffee. The Trinity test, a plutonium-fueled bomb, had an estimated yield of 21 kilotons and left a crater 2.9 meters deep and 335 meters wide.
The energy released by nuclear weapons is measured in tons, kilotons (thousands of tons), or megatons (millions of tons) of TNT. In international standard units (SI), one ton of TNT is equal to 4.184 × 10⁹ joules (J).
The foregoing explosions are measured in terms of how much TNT (or trinitrotoluene) you would need to create an explosion of equivalent size. But that's where things get complicated. Why? At its core, an explosion is a big chemical reaction that releases energy. But the quality of the TNT, its level of moisture, and its age all determine its blast measurement, and that energy might range from 2,000 to 6,000 joules per gram. For the sake of measuring explosions, scientists use a constant 4,184 joules per gram to represent that range.
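The conversion convention can be captured in a few lines. This is a sketch using the standard definition of 4.184 × 10⁹ joules per ton of TNT; the function name is my own:

```python
# By convention, 1 ton of TNT equivalent is defined as 4.184e9 joules
# (4,184 joules per gram times one million grams).
J_PER_TON_TNT = 4.184e9

def tnt_equivalent_tons(energy_joules: float) -> float:
    """Convert an energy in joules to tons of TNT equivalent."""
    return energy_joules / J_PER_TON_TNT

# A one-kiloton explosion corresponds to a thousand tons' worth of energy:
print(tnt_equivalent_tons(4.184e12))   # ~1,000 tons, i.e. 1 kiloton
# A one-megaton explosion corresponds to a million tons' worth:
print(tnt_equivalent_tons(4.184e15))   # ~1,000,000 tons, i.e. 1 megaton
```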
The problem, writes Chris Mills for Gizmodo, is one that has a lot to do with how scientists measure explosions.
That's all pretty arbitrary, says Mills. Though he suggests scientists abandon the imperial system of measurement altogether and adopt a standard explosion measurement such as joules instead, that more effective solution doesn't seem likely any time soon. The qualitative determination of any blast follows from the fact that the seismological monitors in question do not give a completely accurate assessment of the explosion's magnitude and hence hinder the ability to convert any major blast, nuclear or manmade, into an exact, or nearly exact, TNT equivalent.
See: http://large.stanford.edu/courses/2012/ph241/miller1/
See: https://www.smithsonianmag.com/smart-news/how-do-we-measure-explosions-180956271/
Beyond the inexact determination of a TNT equivalency, we need to return to meteorology.
Could the sheets of gray clouds that hang low over the ocean disappear suddenly in a decided warming trend?
Yes, if you believe a study published yesterday in Nature Geoscience—and the amplifying media coverage of it. If atmospheric carbon dioxide (CO2) levels triple—an unlikely, but not implausible scenario given past rates of human and natural emissions—these stratocumulus clouds may tend to vanish in a frightening feedback loop. Fewer of the cooling clouds would mean a warmer Earth, which in turn would mean fewer clouds, leading to an 8°C jump in warming—a staggering, world-altering change.
But many climate scientists who research clouds are pushing back against this study, arguing that its analysis of one small patch of atmosphere does not apply to the entire globe. It's a very simple model that "essentially has a knob with two settings," says Joel Norris, a cloud scientist at the Scripps Institution of Oceanography in San Diego, California. "But it is very likely that the Earth has more knobs than two."
As sophisticated as they are, climate models have a hard time dealing with clouds. Condensing moisture and turbulent air form clouds at scales smaller than models can directly simulate, so instead they use approximations for this behavior. To understand clouds better, scientists have instead developed high-resolution eddy simulations, which re-create the life of small parcels of the atmosphere, including key physics of cloud formation that climate models can't handle directly.
Several years ago, a project comparing six leading eddy simulations looked at how just a 2°C temperature rise influenced low ocean clouds. Two dynamics emerged that caused the clouds to thin, exacerbating warming. First, higher temperatures allowed more dry air to penetrate thin clouds from above, preventing them from thickening and reflecting more of the sun's energy. Second, increased CO2 levels trapped heat near the cloud tops, preventing their cooling. Because such cooling drives the turbulence that forms clouds, the effect could impede cloud formation, fueling further warming. If emissions continued, it seemed plausible that these low clouds would melt away.
The frustration with how poorly global models handle clouds was a primary reason that Tapio Schneider, a climate dynamicist at the California Institute of Technology (Caltech) in Pasadena and the new study's lead author, began construction of a new climate model last year. Dubbed the Climate Machine, it would use artificial intelligence to learn from eddy simulations and satellite observations to improve its rendering of clouds. Doing so first meant building, with his team, their own eddy simulation, one that could dynamically interact, or couple, with the ocean, allowing the simulated clouds to spur warming and vice versa.
The new study, using an earlier eddy simulation built before the Climate Machine, shows the same feedbacks that others had previously identified. But Schneider ran it for much higher CO2 concentrations than most had done. As levels reached 1200 parts per million—three times what they are today, and a number that could be reached next century if no effort is made to stop climate change—the low cloud decks rapidly withered away.
The model results themselves look solid, if not particularly novel. Several cloud scientists, however, object to the next step Schneider took: extrapolating the results of his eddy simulation, which represents only one spot that seems prone to cloud loss, to every area with similar stratocumulus cloud decks. Doing so resulted in all of these clouds disappearing nearly at once, allowing much more of the sun's energy to suddenly be absorbed by the dark ocean. It's a stretch to think the clouds and ocean would link together in such a simple way, says Bjorn Stevens, a climate scientist at the Max Planck Institute for Meteorology in Hamburg, Germany. "This coupling is done in a manner which does not give one confidence in the result."
There's no doubt these feedbacks will be in play. Past work has shown it, says Chris Bretherton, a cloud scientist at the University of Washington in Seattle. "But they'd all happen at different times in different concentrations of CO2 in different places. That would smooth it all out." There wouldn't be a sudden tipping point where all the clouds disappeared. It would happen gradually, subject to the complex response of the ocean and atmosphere. "That's where I take issue with this," Bretherton says. "I think the tipping point is not right."
Indeed, the new model is so simple, lacking things such as the noise of weather, that it can only simulate rapid transitions, adds Stephen Klein, an atmospheric scientist at Lawrence Livermore National Laboratory in California. "Because of those simplifications, I don't find the ‘tipping point' nature of their work to be believable."
Schneider stands by his interpretation. "I looked for all possible reasons to be wrong but ran out of them," he says. The main implication, he adds, is that climate models need to be better equipped to handle clouds. "We shouldn't be complacent about trusting models to predict the future into the 22nd century. There could be other things that models don't quite capture."
Bretherton says more cloud-resolving models are on their way. "Within the next few years, we will have global models that will do what this does in a more defensible way." Bretherton is in the midst of developing such a model himself, which also relies on eddy simulations to power its simulations. To his surprise, he adds, initial runs seemed to suppress the warming feedbacks for these clouds more than expected.
The Caltech climate model, meanwhile, will take another few years to come together. But it's no coincidence that Schneider began to push to develop the model once, 2 years ago, he witnessed his eddy simulation eliminating clouds.
It will be an interesting test to see whether that tendency extends to the Climate Machine he's developing, adds Matthew Huber, a paleoclimatologist at Purdue University in West Lafayette, Indiana. The global model might catch this type of dynamic—or it could show that the climate system overall somehow buffers such "tippiness" at smaller scales out of its system. "That is indeed the only reason to develop this new model," he says, "to predict climate surprises."
*Correction, 5 March, 10:10 a.m.: An earlier version of this story stated that the Climate Machine will be powered by the eddy simulation used in this study. The model will instead use a new, purpose-built eddy code.
The atmosphere is about 0.8˚ Celsius warmer than it was in 1850. Given that the atmospheric concentration of carbon dioxide has risen 40 percent since 1750 and that CO2 is a greenhouse gas, a reasonable hypothesis is that the increase in CO2 has caused, and is causing, global warming.
But a hypothesis is just that. We have virtually no ability to run controlled experiments, such as raising and lowering CO2 levels in the atmosphere and measuring the resulting change in temperatures. What else can we do? We can build elaborate computer models that use physics to calculate how energy flows into, through, and out of our planet’s land, water, and atmosphere. Indeed, such models have been created and are frequently used today to make dire predictions about the fate of our Earth.
The problem is that these models have serious limitations that drastically limit their value in making predictions and in guiding policy. Specifically, three major problems exist. They are described below, and each one alone is enough to make one doubt the predictions. All three together deal a devastating blow to the forecasts of the current models.
Measurement Error
Imagine that you’re timing a high school track athlete running 400 meters at the beginning of the school year, and you measure 56 seconds with your handheld stopwatch that reads to ±0.01 seconds. Imagine also that your reaction time is ±0.2 seconds. With your equipment, you can measure an improvement to 53 seconds by the end of the year. The difference between the two times is far larger than the resolution of the stopwatch combined with your imperfect reaction time, allowing you to conclude that the runner is indeed now faster. To get an idea of this runner’s improvement, you calculate a trend of 0.1 seconds per week (3 seconds in 30 weeks). But if you try to retest this runner after half a week, trying to measure the expected 0.05-second improvement, you will run into a problem. Can you measure such a small difference with the instrumentation at hand? No. There’s no point in even trying because you’ll have no way of discovering if the runner is faster: the size of what you are trying to measure is smaller than the size of the errors in your measurements.
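The stopwatch reasoning can be written out explicitly. A minimal sketch, using the numbers above (0.01-second resolution, ±0.2-second reaction error):

```python
# A signal is only detectable when it exceeds the total measurement error.
resolution = 0.01          # stopwatch resolution, in seconds
reaction_error = 0.2       # timer's reaction-time error, in seconds
total_error = resolution + reaction_error   # ~0.21 s worst case

season_gain = 3.0          # seconds of improvement over 30 weeks
weekly_trend = season_gain / 30             # 0.1 s per week
half_week_gain = weekly_trend / 2           # expected 0.05 s after half a week

print(season_gain > total_error)     # True: a 3-second change is detectable
print(half_week_gain > total_error)  # False: 0.05 s is lost in the noise
```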
Scientists present measurement error by describing the range around their measurements. They might, for example, say that a temperature is 20˚C ±0.5˚C. The temperature is probably 20.0˚C, but it could reasonably be as high as 20.5˚C or as low as 19.5˚C.
Now consider the temperatures that are recorded by weather stations around the world.
Patrick Frank is a scientist at the Stanford Synchrotron Radiation Lightsource (SSRL), part of the SLAC National Accelerator Laboratory at Stanford University. Frank has published papers that explain how the errors in temperatures recorded by weather stations have been incorrectly handled. Temperature readings, he finds, have errors over twice as large as generally recognized. On this basis, Frank stated, in a 2011 article in Energy & Environment, “…the 1856–2004 global surface air temperature anomaly with its 95% confidence interval is 0.8˚C ± 0.98˚C.” The error bars are wider than the measured increase. It looks as if there’s an upward temperature trend, but we can’t tell definitively. We cannot reject the hypothesis that the world’s temperature has not changed at all.
The Sun’s Energy
Climate models are used to assess the CO2-global warming hypothesis and to quantify the human-caused CO2 “fingerprint.”
How big is the human-caused CO2 fingerprint compared to other uncertainties in our climate model? For tracking energy flows in our model, we use watts per square meter (Wm⁻²). The sun’s energy that reaches the Earth’s atmosphere provides 342 Wm⁻²—an average of day and night, poles and equator—keeping it warm enough for us to thrive. The estimated extra energy from excess CO2—the annual anthropogenic greenhouse gas contribution—is far smaller, according to Frank, at 0.036 Wm⁻², or 0.01 percent of the sun’s energy. If our estimate of the sun’s energy were off by more than 0.01 percent, that error would swamp the estimated extra energy from excess CO2. Unfortunately, the sun isn’t the only uncertainty we need to consider.
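The 0.01 percent figure follows directly from the two numbers above. A quick check, using the 342 and 0.036 Wm⁻² values as given in the text:

```python
solar_flux = 342.0     # W/m^2: average solar energy reaching the atmosphere
co2_forcing = 0.036    # W/m^2: Frank's estimate of the annual anthropogenic contribution

ratio = co2_forcing / solar_flux
print(f"{ratio:.5%}")  # about 0.01053%, i.e. roughly 0.01 percent of the sun's energy
```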
Cloud Errors
Clouds reflect incoming radiation and also trap it as it is outgoing. A world entirely encompassed by clouds would have dramatically different atmospheric temperatures than one devoid of clouds. But modeling clouds and their effects has proven difficult. The Intergovernmental Panel on Climate Change (IPCC), the established global authority on climate change, acknowledges this in its most recent Assessment Report, from 2013:
The simulation of clouds in climate models remains challenging. There is very high confidence that uncertainties in cloud processes explain much of the spread in modelled climate sensitivity.
What is the net effect of cloudiness? Clouds lead to a cooler atmosphere by reducing the sun’s net energy by approximately 28 Wm⁻². Without clouds, more energy would reach the ground and our atmosphere would be much warmer. Why are clouds hard to model? They are amorphous; they reside at different altitudes and are layered on top of each other, making them hard to discern; they aren’t solid; they come in many different types; and scientists don’t fully understand how they form. As a result, clouds are modeled poorly. This contributes an average uncertainty of ±4.0 Wm⁻² to the atmospheric thermal energy budget of a simulated atmosphere during a projection of global temperature. This thermal uncertainty is 110 times as large as the estimated annual extra energy from excess CO2. If our climate model’s calculation of clouds were off by just 0.9 percent—0.036 is 0.9 percent of 4.0—that error would swamp the estimated extra energy from excess CO2. The total combined errors in our climate model are estimated to be about 150 Wm⁻², which is over 4,000 times as large as the estimated annual extra energy from higher CO2 concentrations. Can we isolate such a faint signal?
In our track athlete example, this is equivalent to having a reaction time error of ±0.2 seconds while trying to measure a time difference of 0.00005 seconds between any two runs. How can such a slight difference in time be measured with such overwhelming error bars? How can the faint CO2 signal possibly be detected by climate models with such gigantic errors?
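The ratios in the last two paragraphs check out arithmetically. A sketch using the figures quoted above:

```python
signal = 0.036          # W/m^2: estimated annual extra energy from excess CO2
cloud_error = 4.0       # W/m^2: average cloud-modeling uncertainty
total_error = 150.0     # W/m^2: estimated total combined model error

print(round(cloud_error / signal))   # 111, the "110 times" in the text
print(round(total_error / signal))   # 4167, the "over 4,000 times" in the text

# Mapping onto the stopwatch analogy: scale a 0.2 s error down by the same factor.
print(0.2 / (total_error / signal))  # ~4.8e-5 s, the 0.00005 seconds quoted above
```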
Even the relationship between CO2 concentrations and temperature is complicated.
The glacial record shows geological periods with rising CO2 and global cooling and periods with low levels of atmospheric CO2 and global warming. Indeed, according to a 2001 article in Climate Research by astrophysicist and geoscientist Willie Soon and his colleagues, “atmospheric CO2 tends to follow rather than lead temperature and biosphere changes.”
A large proportion of the warming that occurred in the 20th century occurred in the first half of the century, when the amount of anthropogenic CO2 in the air was one quarter of the total amount there now. The rate of warming then was very similar to the rate of warming recently. We can’t have it both ways. The current warming can’t be unambiguously caused by anthropogenic CO2 emissions if an earlier period experienced the same type of warming without the offending emissions.
Climate Model Secret Sauce
It turns out that climate models aren’t “plug and chug.” Numerous inputs are not the direct result of scientific studies; researchers need to “discover” them through parameter adjustment, or tuning, as it is called. If a climate model uses a grid of 25×25-kilometer boxes to divide the atmosphere and oceans into manageable chunks, storm clouds and low marine clouds off the California coast will be too small to model directly. Instead, according to a 2016 Science article by journalist Paul Voosen, modelers need to tune for cloud formation in each grid box based on temperature, atmospheric stability, humidity, and the presence of mountains. Modelers continue tuning climate models until they match a known 20th-century temperature or precipitation record. And yet, at that point, we have to ask whether these models are more subjective than objective. If a model shows a decline in Arctic sea ice, for instance—and we know that Arctic sea ice has, in fact, declined—is the model telling us something new or just regurgitating its adjustments?
Climate Model Errors
Before we put too much credence in any climate model, we need to assess its predictions. The following points highlight some of the difficulties of current models.
Vancouver, British Columbia, warmed by a full degree in the first 20 years of the 20th century, then cooled by two degrees over the next 40 years, and then warmed to the end of the century, ending almost where it started. None of the six climate models tested by the IPCC reproduced this pattern. Further, according to scientist Patrick Frank in a 2015 article in Energy & Environment, the projected temperature trends of the models, which all employed the same theories and historical data, were as far apart as 2.5˚C.
According to a 2002 article by climate scientists Vitaly Semenov and Lennart Bengtsson in Climate Dynamics, climate models have done a poor job of matching known global rainfall totals and patterns.
Climate models have been subjected to “perfect model tests,” in which they were used to project a reference climate and then, with some minor tweaks to initial conditions, to re-create temperatures in that same reference climate. This is basically asking a model to do the same thing twice, a task for which it should be ideally suited. In these tests, Frank found, the results in the first year correlated very well between the two runs, but years 2–9 showed such poor correlation that the results could have been random. Failing a perfect model test shows that the results aren’t stable and suggests a fundamental inability of the models to predict the climate.
The ultimate test for a climate model is the accuracy of its predictions. But the models predicted that there would be much greater warming between 1998 and 2014 than actually happened. If the models were doing a good job, their predictions would cluster symmetrically around the actual measured temperatures. That was not the case here; a mere 2.4 percent of the predictions undershot actual temperatures and 97.6 percent overshot, according to Cato Institute climatologist Patrick Michaels, former MIT meteorologist Richard Lindzen, and Cato Institute climate researcher Chip Knappenberger. Climate models as a group have been “running hot,” predicting about 2.2 times as much warming as actually occurred over 1998–2014. Of course, this doesn’t mean that no warming is occurring, but, rather, that the models’ forecasts were exaggerated.
If someone with a hand-held stopwatch tells you that a runner cut his time by 0.00005 seconds, you should be skeptical. If someone with a climate model tells you that a 0.036 Wm⁻² CO2 signal can be detected within an environment of 150 Wm⁻² error, you should be just as skeptical.
As Willie Soon and his coauthors found, “Our current lack of understanding of the Earth’s climate system does not allow us to determine reliably the magnitude of climate change that will be caused by anthropogenic CO2 emissions, let alone whether this change will be for better or for worse.”
A systematic error causes a measured value to be consistently greater or less than the true value. The amount by which the value differs from the true value may be a constant. Such a situation would occur, for example, when using a micrometer that has a ‘zero error’: the scale of the micrometer indicating a non-zero value when the jaws of the micrometer are closed. In other circumstances, a systematic error may be proportional to the magnitude of the quantity being measured. For example, if a wooden metre rule has expanded along its whole length as a consequence of absorbing moisture, the size of the systematic error is not constant but increases with the size of the object being measured.
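The two cases, a constant offset and a proportional error, can be illustrated with a toy model. The functions and values here are hypothetical, chosen only to mirror the micrometer and metre-rule examples:

```python
def zero_error(true_value, offset=0.05):
    """Constant systematic error, like a micrometer with a zero error:
    every reading is shifted by the same fixed amount."""
    return true_value + offset

def proportional_error(true_value, factor=1.02):
    """Systematic error proportional to the measurand, like a metre rule
    that has expanded 2% along its whole length."""
    return true_value * factor

for true in (1.0, 10.0, 100.0):
    print(true, zero_error(true), proportional_error(true))
# The constant offset stays at 0.05 regardless of the object's size;
# the proportional error grows from 0.02 to 2.0 as the object gets larger.
```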
Systematic errors may be revealed in two ways: by means of specific information or when the experimental set-up is changed (whether intentionally in order to identify systematic errors, or for some other reason). In both cases a good understanding of the science underlying the measurement is necessary and critical. In general, statistical analysis may or may not be involved in assessing the uncertainty associated with a systematic error, so this uncertainty may be Type A or B. When the effect of random errors has been minimised, for example by taking the mean of many values, the influence of systematic errors will remain unless they too have been identified and corrected for.
Since a systematic error does not necessarily cause measured values to vary, it often remains hidden (and may be larger than the random errors).