January 25, 2025 - By Michelle L’Heureux - Right now, with La Niña conditions underway, I guarantee at least one of our readers is thinking “This alleged ‘La Niña’ is going to bust so hard in my region. It’s supposed to be DRY and it’s been WET so far. What the heck is wrong with you people!? BUST, BUST, BUST…” It’s frustrating! I get it! That’s because I too am human and get weirdly annoyed when the forecast is for something I want to happen, say 5 inches of snow, and then we end up with dry pavement. But, being a scientist, I also realize that weather and climate predictions contain uncertainty. And uncertainty stinks, especially when you really want that outcome to materialize.
So, today I am going to try to explain the inherent uncertainty that we typically see with winter (December–February) La Niña impacts over the United States. No one, I repeat, no one should be surprised when the expected La Niña impact—and by expected, I mean “based on what has happened during past events”—doesn’t happen everywhere we think it may happen. It would honestly be strange if it did! But, with any La Niña, even a weak one like the one we are currently observing, we can still bet on some La Niña-like impacts to arise. That’s true even if the impacts are not constant across all twelve 3-month averages (“seasons”) that we produce climate outlooks for (They’re all here.). However, in this post, I’m only going to focus on past observations—I am not looking at any computer model predictions or outlooks!
Expected precipitation impacts based on past La Niñas
Here is the expected La Niña pattern for precipitation over the US (footnote #2).
La Niña is usually associated with drier conditions across the southern part of the U.S. and wetter conditions to the north. This reflects how La Niña is associated with a more poleward-shifted jet stream that deflects the storm tracks to the north (both Emily and Tom have written some nice explainers).
What if I take what actually happened each winter and give it a score based on how well it resembles the “expected La Niña impacts” pattern above? We’ll call it a match score (footnote #1). In our scoring system, a minus 1 will mean a perfect match to the expected La Niña pattern (the minus sign is because La Niña is the cold phase of ENSO). A score of plus 1 will mean the observed winter precipitation looks perfectly like El Niño. A score of zero will mean that the observed precipitation didn’t look at all like what we’d expect during La Niña or El Niño.
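For the curious, the match score is just a pattern correlation between two maps. Here is a minimal sketch of that calculation in Python—the 2x2 grids are made-up toy values, not real precipitation data, and the real calculation is done on the full US grid:

```python
import numpy as np

def match_score(observed, expected):
    """Pattern (Pearson) correlation between two anomaly maps.

    Correlating observed anomalies against the expected El Nino
    pattern gives -1 for a perfect La Nina match, +1 for a perfect
    El Nino match, and 0 for no resemblance to either.
    """
    obs = observed.ravel() - observed.mean()
    exp = expected.ravel() - expected.mean()
    return float(np.dot(obs, exp) / (np.linalg.norm(obs) * np.linalg.norm(exp)))

# Toy 2x2 "expected El Nino" precipitation anomaly pattern (invented values)
expected_el_nino = np.array([[1.0, -1.0],
                             [0.5, -0.5]])

# A winter that is the exact mirror image scores a perfect La Nina match
la_nina_like_winter = -expected_el_nino
print(round(match_score(la_nina_like_winter, expected_el_nino), 6))  # -1.0
```

Real winters never land at the extremes, of course—that is the whole point of the rest of this post.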
When we calculate these match scores with the expected ENSO precipitation pattern for each winter going back to 1959, we get the time series above. Superimposed onto this time series is a line tracking the status and strength of El Niño and La Niña for the same time period. This line is the Oceanic Niño Index, which you can see in table form here. The two lines are clearly closely related, right? When the Oceanic Niño Index swings upward during El Niño events, match scores tend to be positive, which means the observed precipitation patterns look like those expected with El Niño. When the Oceanic Niño Index swings downward during La Niña events, there are more negative match scores, which means that the observed precipitation anomalies better resemble the expected La Niña impacts.
Another nifty thing about this graph is that the match score appears to be related to the intensity/strength of ENSO events. That means stronger El Niño and La Niña events tend to have better matches between the expected impact over the United States and what actually occurs (and weaker events have weaker matches). Not always, but usually. We can see this relationship in a different way in the scatterplot below. This figure contains the same data as the time series above except now I’m showing the strength of El Niño or La Niña by putting the Oceanic Niño Index on the horizontal axis and the winter’s match score on the vertical axis.
No such thing as a perfect score, but strong events increase the chances of a good one
The fact that the dots are arranged in a diagonal (as is the “best fit” line shown) is good news if you want to make seasonal climate outlooks! How so? It means we can reasonably predict the match score if we know how strong the El Niño or La Niña will be. The match score indicates how confident we can be that the actual winter pattern will match the expected ENSO pattern. Fortunately, we can often predict the occurrence of El Niño and La Niña some months in advance, and we can even provide some probability for the strength (we generate them every month when CPC’s ENSO discussion is updated).
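That “best fit” line is a simple least-squares regression of match score on the Oceanic Niño Index. Here is a sketch of the idea—the (ONI, match score) pairs below are illustrative numbers I made up, not the actual data behind the scatterplot:

```python
import numpy as np

# Hypothetical (ONI, match score) pairs for illustration only --
# negative ONI = La Nina winters, positive ONI = El Nino winters.
oni    = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])
scores = np.array([-0.6, -0.3, -0.1, 0.05, 0.1, 0.35, 0.55])

# Least-squares "best fit" line: match_score ~ slope * ONI + intercept
slope, intercept = np.polyfit(oni, scores, 1)

def predicted_match_score(oni_value):
    """Expected match score given a forecast ONI value."""
    return slope * oni_value + intercept

# A weak La Nina (ONI around -0.7 C) implies a modest negative score
print(round(predicted_match_score(-0.7), 2))  # -0.2 with these toy numbers
```

The positive slope is the whole story: a stronger forecast event (larger |ONI|) shifts the expected match score further from zero, which is why forecasters can be more confident during strong events.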
The bad news is that the match scores don’t ever really get close to perfect scores (+1 or -1), which means, unfortunately, we’re just not ever going to see a perfect match with the expected ENSO pattern. If you use the expectation map as your forecast, it’s just going to bust in places (sorry). Let’s look at three winters when a weaker La Niña was present.
Notice how the observed precipitation patterns deviate from the expected La Niña pattern: the 2017–18 winter had a stronger negative match score, the 2005–06 winter a weaker negative match score, and the 2022–23 winter actually had a slightly positive match score (some places, like the West Coast, resembled the impacts you would see from an El Niño!).
So, what should we expect this winter?
For the current 2024-25 winter, odds favor the ENSO index strength ending up somewhere between -0.5°C and -1.0°C. This is not a big La Niña, and you can see that while most of the dots in this range have negative match scores, they are not big numbers. This basically means that history favors a discernible La Niña influence this winter, but when Nat goes back and reviews the season in a couple of months, there are going to be some busts (footnote #3). In fact, we guarantee a bust somewhere, with busts more likely in regions where the historical ENSO relationships are just not as strong (i.e. the regions where you see lighter colored shading on the La Niña impacts map, but not exclusively). On the upside, if you can make your bets over multiple winters and over the entire United States (larger geographic areas also tend to improve the odds), you’ll still come out ahead more often than not.
For stronger El Niño or La Niña events, there is a lower element of surprise and more predictability, but even during these events the uncertainty around ENSO impacts will never be completely eliminated (footnote #4). That is because there is always something other than ENSO going on (random variability, climate trends, etc.), and it is not unusual for that unexpected “thing” to be unpredictable months in advance. For example, here is the 2008-09 La Niña winter, which had the largest (negative) match score between the observed precipitation anomalies and the expected La Niña pattern. While it looks very La Niña-ish, there were still some mismatches: the Pacific Northwest was drier where we would have expected it to be wetter.
So, what is all of this saying? It says that, even without looking at any computer model predictions, ENSO remains one of our most important predictive tools in our seasonal climate outlooks, especially for precipitation (the relationships between ENSO and temperature are not as strong, as temperature is often dominated by climate trends). And that lines up with what CPC finds when it examines the accuracy of its model-based seasonal precipitation outlooks—they are usually better during ENSO events. While it will not explain everything that happens this winter and spring, La Niña is likely to partially explain what happens. And that is pretty magical. Science for the win! Now you can take your decision support system, similar to the one Brian wrote about a couple months ago, and start hedging your bets. May the odds be in your favor.
Footnotes
(1) I’m using a strategy that we discussed (open access) in the Bulletin of the American Meteorological Society after the end of the 2023-24 El Niño. The “match score” is the pattern correlation, which quantifies the strength of the fit between maps of observed anomalies and the ENSO impacts. One does not have to use pattern correlation—almost any metric that measures the similarity between two patterns would work (like a projection coefficient or a simple hit metric).
(2) In the paper above, we used an ENSO regression map, which is just the precipitation anomalies regressed on the Oceanic Niño Index (the December–February average of Niño-3.4 index values). The method assumes linearity (i.e., El Niño and La Niña impact patterns are assumed to be mirror opposites), so in the image above, I have multiplied the El Niño map by minus 1 to get the conventional La Niña anomalies.
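A sketch of that regression-map construction, using random synthetic stand-ins for the index and the precipitation grid (the real calculation uses observed data on a full US grid):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 40 winters of a DJF ONI-like index and
# precipitation anomalies on a small 3x4 grid (made-up data).
n_years = 40
oni = rng.normal(0.0, 1.0, n_years)
precip_anoms = rng.normal(0.0, 1.0, (n_years, 3, 4))

# Regress each grid point's anomaly time series on the index; the map
# of slopes is the "expected El Nino" pattern (anomaly per 1 C of index).
flat = precip_anoms.reshape(n_years, -1)
slopes = np.array([np.polyfit(oni, flat[:, k], 1)[0]
                   for k in range(flat.shape[1])])
el_nino_pattern = slopes.reshape(3, 4)

# Linearity assumption: the La Nina pattern is the mirror image.
la_nina_pattern = -1 * el_nino_pattern
```

The last line is exactly the "multiply by minus 1" step described above; the linearity assumption is what lets a single regression map serve both phases.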
(3) I’m completely ignoring the newer Relative Oceanic Niño index (RONI) for simplicity in this blog post. But this is something we are keeping our eyes on given we have recently seen relative index values that are about 0.5° C cooler than the traditional values. Nat Johnson has explained that the cooler RONI values could essentially mean that we have better chances of seeing more similarity between this winter’s precipitation anomalies and the La Niña expectation. But please keep in mind that scatterplot above—even if the final 2024-25 dot ends up being shifted 0.5° C to the left (along the x-axis) it doesn’t ensure a better match score. That’s because there is just a lot of intrinsic variability that is not explained by ENSO!
(4) Check out the lower rightmost dot with the 16… this was the 2015-16 El Niño, one of the strongest in history, which didn’t have a lot of the classical El Niño hallmarks over the United States.
Disclaimer:
The ENSO blog is written, edited, and moderated by Michelle L’Heureux (NOAA Climate Prediction Center), Emily Becker (University of Miami/CIMAS), Nat Johnson (NOAA Geophysical Fluid Dynamics Laboratory), and Tom DiLiberto and Rebecca Lindsey (contractors to NOAA Climate Program Office), with periodic guest contributors.
Ideas and explanations found in these posts should be attributed to the ENSO blog team, and not to NOAA (the agency) itself. These are blog posts, not official agency communications; if you quote from these posts or from the comments section, you should attribute the quoted material to the blogger or commenter, not to NOAA, CPC, or Climate.gov.