Volts podcast: Fran Moore on how to represent social change in climate models
A newsletter, podcast, & community focused on the technology, politics, and policy of decarbonization. In your inbox once or twice a week.
Description
In this episode, UC Davis assistant professor Fran Moore
discusses her research team’s effort to construct a climate model
that includes (instead of ignores) effects from the interplay of
social conditions and policy change.
Text transcript:
David Roberts
One of my long-time gripes about the climate-economic models that
outfits like the IPCC produce is that they ignore politics. More
broadly, they ignore social change and the way it can both drive
and be driven by technology and climate impacts.
This isn’t difficult to explain — unlike technology costs,
biophysical feedbacks, and other easily quantifiable variables,
the dynamics of social change seem fuzzy and qualitative, too
soft and poorly understood to include in a quantitative model.
Consequently, those dynamics have been treated as “exogenous” to
models. Modelers simply determine those values, feed in a set
level of policy change, and the models react. Parameters internal
to the model cannot affect policy or be affected by it in turn;
models do not capture socio-physical and socio-economic feedback
loops.
But we know those feedback loops exist. We know that falling
costs of technology can shift public sentiment which can lead to
policy which can further reduce the costs of technology. All
kinds of loops like that exist, among and between climate,
technology, and human social variables. Leaving them out entirely
can produce misleading results.
At long last, a new research paper has tackled this problem
head-on. Fran Moore, an assistant professor at UC Davis working
at the intersection of climate science and economics, took a stab
at it in a recent Nature paper, “Determinants of emissions
pathways in the coupled climate–social system.” Moore, along with
several co-authors, attempted to construct a climate model that
includes social feedback loops, to help determine what kinds of
social conditions produce policy change and how policy change
helps change social conditions.
I am fascinated by this effort and by the larger questions of how
to integrate social-science dynamics into climate analysis, so I
was eager to talk to Moore about how she constructed her model,
what kinds of data she drew on, and how she views the dangers and
opportunities of quantifying social variables.
Without further ado, Fran Moore, welcome to Volts. Thanks so much
for coming.
Fran Moore:
Thanks for having me.
David Roberts:
In climate modeling, we put in values for what we think is going
to happen to the price and then watch the model play out. I've
been looking at climate modeling my whole career, and I've always
thought that what's actually going to determine the outcomes are
our social and political processes, which are not in the model.
So really, the models amount to a wild guess, we're all wallowing
in uncertainty, and we just have to live with it.
You confronted the same situation, and being a much more stalwart
and ambitious person than I, said, “I'm going to try to get the
social and political stuff into the model to make the model
better.”
In conventional climate modeling, these sociopolitical variables
are treated as exogenous. What does it mean for them to be
exogenous to the model?
Fran Moore:
Exogenous means that they come in from outside, so as the
researcher using the model, you have to specify that. In
particular, when we're thinking about climate change, those
really important exogenous variables are the ambition of climate
policy, whether that be in terms of trajectories of carbon
prices, or targets for temperature, or targets for emissions
levels. Typically, those are things that you set and they appear
exogenously in two ways.
One is, in climate modeling, you take some radiative forcing
trajectory, or some greenhouse gas concentration, and you ask,
what does the climate system do in response to that? But it also
comes up in other types of modeling, like energy modeling, where
these policies appear exogenously as constraints on the model. So
you're asking an energy model to tell you what's the least cost
pathway for getting to a 2° temperature target, or to a certain
carbon concentration limit in the atmosphere. Those are both
versions of exogenous inputs of policy into climate-relevant
modeling.
David Roberts:
The upshot is that the modeler is basically specifying the
trajectory of policy and then asking the model: given that, what
will happen? What it means to make it endogenous, then, is
allowing social and political factors to be affected by other
variables and to affect them in return inside the model. What
does it look like for something like this to be endogenous? What
does that mean to us?
Fran Moore:
The way it works in our model is that climate policy becomes
endogenous. We don't specify what it does; it arises from
modeling of more fundamental social-political processes that we
think are going to drive or enable climate policy as it might
play out over the future. By taking that step back we see this
policy not just as something that we're going to specify and ask
what happens, but actually something that emerges from the system
itself, that comes out of a model’s structure and
parameterization.
David Roberts:
On one hand, that seems like exactly what we want: let’s specify
some initial conditions, then ask the model what people will do
on policy later. On the other hand, intuitively, it sounds
impossible, like trying to predict the future.
When I think about social and political forces and variables,
there are an infinite number of ways to conceive of them. Of all
the social and political forces you could imagine, how do you
narrow down to something manageable? Which variables are you
choosing?
Fran Moore:
Let me take a minute to say that it's actually not obvious that
this is what you want to do. A lot of climate modeling has taken
the view that the goal of this is to inform policy, and the goal
of the modeling is to say to policymakers “if you do X then Y
will happen and if you do Z then W will happen.” If that is the
goal of your modeling exercise, then you don't want policy to
emerge endogenously; you want to be able to specify some possible
counterfactuals so you can take the results of your model and
tell policymakers about just how bad climate change will be under
these different cases.
I see two main reasons why that is unsatisfactory as the only
approach. One, scientifically, it seems unsatisfying, in that
human decisions are the single most important determinant of how
the climate system is going to evolve and if we just exclude them
from our modeling, we don't really understand the system as a
whole.
But there's also a practical application, in that we're not just
here trying to inform mitigation decisions. Increasingly, we're
trying to help adaptation. And not being able to tell adaptation
decision-makers about the probabilities of different emission
trajectories when your single largest uncertainty is between
different emissions pathways – that's really unsatisfying and not
what we need for adaptation. We want to be able to put
probability bounds on those pathways in order to support much better
adaptation decision-making. That was part of the motivation.
David Roberts:
When you're choosing these variables, these feedback loops that
you're trying to include in the model, where do you start? Where
do you look? Is there existing literature or existing loops you
can adapt and put in, or are you just starting with a blank piece
of paper?
Fran Moore:
It was definitely a process. One of the starting points was the
observation that we do want to be focused on feedback loops. This
is a certain style of modeling, sometimes called system dynamics
modeling, where you're focused on the coupling between different
feedback loops because they tend to be really important in
driving the dynamics of the system over long time periods. If you
have reinforcing feedbacks, particularly if they're coupled to
each other, you can get very complex, nonlinear behavior emerging
from the model. We wanted to make sure that we were allowing for
that, so we did have a focus on trying to identify the feedback
loops.
Essentially, an interdisciplinary team of people started
brainstorming, based on our knowledge, what theories around
psychology, social psychology, sociology, and political science
might be relevant here. Then we did a literature review across
different potential feedback loops, looking for evidence within a
really diverse range of literatures about dynamics that might be
relevant to the system, that we could take and incorporate into
this model.
David Roberts:
Give us an example of how a change in one thing might trigger a
change in another thing that might trigger a change in policy.
Fran Moore:
One that we incorporate into our emissions or energy component is
learning-by-doing feedback; it’s represented in a lot of
energy-system models these days. This is a phenomenon where new
technologies tend to be really expensive, but you tend to get
cost reductions with increased deployment, so your technology
gets cheaper, so it gets deployed more, so it gets cheaper, so it
gets deployed more. That's a reinforcing feedback where there's
quite a lot of evidence across different energy technologies
about how large that effect tends to be.
Some of the ones where the evidence is more qualitative or
perhaps more debatable would be things like, we have a feedback
from policy change to public opinion. This is the idea of the
normative force of law or expressive force of law, which is
described in some legal literature. It's the idea that policy
change itself can signal to people what is desirable behavior or
desirable outcomes, so you can get this reinforcing feedback
where you get some change in the law that later drives public
opinion in that direction because it's signaling something.
That's the kind of feedback that we allow for in the model.
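[The two reinforcing loops Moore describes — learning-by-doing cost declines and the expressive force of law — can be sketched as a toy system-dynamics simulation. Everything below (functional forms, parameter values, variable names) is an illustrative assumption for this transcript, not the model from the paper.]

```python
# Toy system-dynamics sketch of two coupled reinforcing feedbacks:
#  (1) learning-by-doing: deployment lowers cost, lower cost raises deployment
#  (2) expressive force of law: policy shifts opinion, opinion enables policy
# All functional forms and parameter values are illustrative assumptions.

def simulate(years=30, learning_exponent=0.3):
    cost = 100.0       # technology cost (arbitrary units)
    cumulative = 1.0   # cumulative deployment (normalized)
    opinion = 0.4      # share of the public supporting climate policy
    policy = 5.0       # stand-in for an average carbon price
    history = []
    for _ in range(years):
        # deployment this year rises as cost falls and policy strengthens
        deployment = (policy / cost) * 10.0
        cumulative += deployment
        # Wright's-law-style experience curve: cost falls with a power
        # of cumulative deployment
        cost = 100.0 * cumulative ** (-learning_exponent)
        # policy change signals norms, nudging opinion upward (capped at 1)
        opinion = min(1.0, opinion + 0.002 * policy)
        # stronger opinion enables stronger policy; weak opinion erodes it
        policy *= 1.0 + 0.2 * (opinion - 0.5)
        history.append((cost, opinion, policy))
    return history

traj = simulate()
```

Because both loops are reinforcing and coupled, small parameter changes can flip the trajectory from stagnation to acceleration — the "tipping" behavior discussed later in the conversation.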
David Roberts:
The learning curves are a socioeconomic process, but there's tons
of data on them; they’re very well-understood and well-modeled
and quantified. I can imagine getting those in the model
relatively smoothly. But something like the extent to which
passing a policy serves as a social proof that then shifts public
opinion, which then makes the next policy slightly more likely –
I can conceptualize that loop easily, I understand what it means
on a qualitative level, but how do you begin to quantify that?
What are the data sources that would even feed into that?
Fran Moore:
One issue we ran into in designing the model is that a lot of
the evidence here comes from more qualitative disciplines like
the legal and political science
literature. That doesn't mean it's not evidence; in some cases,
we have quite rich case studies showing some of these feedbacks
in operation. But it does make it challenging when you're trying
to take that and put an equation on it.
One thing is that we allowed for a lot of uncertainty. In our
final set of runs, we sample over a lot of uncertain parameters
in the model and we try and say, given the fact that we don't
know a lot about this particular parameter that describes the
strength of the feedback, or even the existence of this feedback,
what can we say probabilistically about where emissions might go?
The other thing that we do is a hindcast exercise to jointly
constrain these parameters. Even though we don't have data that
is allowing us to say “this feedback in particular,” we can take
a subset of the model – in this case, I think it was our opinion,
policy, adoption, and cognition modules – and we can start it in
the past. I think 2010 was when we first had data for
distribution of public opinion as well as carbon pricing. Then we
can run the model forward using, again, sampling over a very
large set of the parameter space, and look at how well that
evolution of opinion and that evolution of policy actually
matched what happened over 10 years. Based on the match under
different parameter combinations, we can probabilistically say
“this set of parameter combinations is more likely true than this
set of parameter combinations” just because it seems like it
generates a better match in the model over the last 10
years.
We do two versions of that for different parts of the model,
these hindcast parameter-constraint exercises, and that's
primarily how our empirical evidence comes into the model. It
would be great if we could use other data from other fields to
constrain some of these parameters more precisely, but for some
of these ideas, that doesn't exist at the moment.
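[The hindcast idea can be illustrated with a minimal likelihood-weighting sketch, in the spirit of approximate Bayesian computation: sample parameter sets, run the model over the historical window, and weight each set by how well it reproduces the observed record. The one-parameter toy model, the "observed" series, and the error scale below are all stand-ins invented for illustration.]

```python
# Sketch of the hindcast parameter-constraint idea: sample many parameter
# sets, run the model over the historical window, and weight each set by
# how well it reproduces the observed trajectory. Everything here
# (toy model, observations, error scale) is illustrative.
import math
import random

observed = [5.0, 6.0, 7.5, 9.0, 11.0]  # e.g. an average-carbon-price series

def toy_model(growth_rate, start=5.0, steps=5):
    # stand-in for running the coupled opinion/policy modules from 2010
    return [start * (1 + growth_rate) ** t for t in range(steps)]

def weight(simulated, obs, scale=1.0):
    # Gaussian likelihood of the observations given a simulated trajectory
    sse = sum((s - o) ** 2 for s, o in zip(simulated, obs))
    return math.exp(-sse / (2 * scale ** 2))

random.seed(0)
samples = [random.uniform(0.0, 0.5) for _ in range(10_000)]
weights = [weight(toy_model(g), observed) for g in samples]

# posterior-weighted mean: parameter values that match the historical
# record contribute more to the estimate
post_mean = sum(g * w for g, w in zip(samples, weights)) / sum(weights)
```

Parameter sets that track the record get high weight; those that diverge get essentially none — which is the probabilistic "this combination is more likely true than that one" statement described above.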
David Roberts:
One of the things that's done with climate physical models to
test them out is, as you say, backcast – meaning if we went back
in time and used this model, would it accurately predict what
actually happened? Do you think a model like this, with social
features, some of which are fuzzier than others, could ever
accurately backcast? What did you find when you backcasted? Are
you comfortable that you have a set of feedback loops now that at
least accurately captured the last 10 years?
Fran Moore:
It is tricky in that some of the feedback loops play out over
longer timescales than our data cover.
Ideally, we would have much better historical data on some of
these social measures that allow us to go back much further.
Because we only have data over about 10 years, and it might even
be less than that, we're able to say that over this relatively
short time period, the model seems like it’s not going completely
crazy. But part of the goal of incorporating feedback is so that
you have the potential for things like tipping points and things
like that, so you don't want to over-constrain.
Sometimes you see critiques of energy models that they
over-constrain in order to precisely match historically what's
happened; but if some of those constraints can change in the
future, and we're projecting out a long way here, then you want
to allow for that to happen too. So it's a balance between those:
what do we have evidence for, in a broad sense of the word, in
terms of the structure of the feedback; as well as, can we use
the evidence as we have it to constrain it?
There's uncertainty, and we can get a wide range of behavior, but
more or less it can track this gradual expansion of support for
climate policy in OECD countries, which is our focus, as well as
relatively slow increase in average carbon price, which is our
measure of policy. Between those two things, they can constrain
some of the parameters in the model, but not all of them.
David Roberts:
When it comes to social and political stuff regarding climate
change, by definition, there aren’t data sets going back a long
way, because the issue itself is relatively new to society and
politics – a couple of decades, which in modeling terms is a
relatively short period of time. So what data do we have? Of all
the kajillions of social and political factors you might imagine
trying to get in here, do some have data available and some
don't? Do you end up biasing yourself toward factors where there
is data available just because there is data, and overlooking
things that might be important because there is no data?
Fran Moore:
On that latter question, because we built into the model the
potential for the feedback loops where we don't necessarily have
strong quantitative data, we're deliberately trying to avoid that
problem. We're allowing those feedback loops to operate
probabilistically. We can't constrain them directly with data, we
recognize that; there are only limited model outputs and
parameters that we can actually match to stuff that’s measurable
in a defensible way. But that doesn't mean that we don't include
them in our model. We still allow for those effects to operate,
because they're potentially really important in driving the
dynamics, and just because we don't measure them super well
doesn't mean that they shouldn't be in there.
In terms of the exact data, what's important is to have repeated
data on repeated measures over time because that helps you
constrain these dynamic systems. That is tough, because you have
opinion surveys that’ll be for one country in one year and a
different country in a different year, or the question
changes.
The Yale Program on Climate Change Communication has something
for the US, so originally we used that. Then we wanted to be
representative of more countries, so we used a Pew question that
has been asked repeatedly across about nine OECD countries since
about 2010. I don't think they do it every year, but it gives us
a time series of how opinion is shifting on average across these
countries.
We have two other measures. One is on policy, so that measures
carbon pricing. That’s fairly straightforward – well, it’s kind
of straightforward.
David Roberts:
I was going to ask about it, because explicit carbon pricing
policies are a very small fraction of total climate policy. Are
you taking all those other climate policies and trying to
translate them into an implicit carbon price, or are you just
looking at explicit carbon pricing?
Fran Moore:
That is exactly the caveat I was about to add. Ideally, we would
like to do exactly what you said, which is take all these climate
policies around the world that have some associated shadow cost
and that can be quantified in terms of effective carbon price at
the margin, and add it all up. We just can't do it. That has not
been done by other people. So instead, we use just explicit
carbon pricing: cap-and-trade systems and carbon taxes,
essentially. Those are pretty well-documented.
David Roberts:
Don't you worry that you're only capturing a fraction of policy?
How do you compensate for that?
Fran Moore:
The important question is, do we get the change right? We're not
able to say more than that. We can say we seem like we're
matching the rate of change of explicit carbon pricing.
How that matches up to other measures that would include things
like renewable portfolio standards and so on is not clear, but
those get complicated too, because there’s a bunch of reasons why
you might do them. It's not just carbon – things like CAFE
standards have a climate component, but they've also got air
pollution and fuel economy and saving people money at the gas
tank and all those things. It'd be great if someone else wanted
to come up with the shadow cost of all these different
regulations; we would definitely use it.
David Roberts:
One of the significant types of findings that might come out of a
model like this is, given the current social-political
trajectory, when might we see some sort of tipping point when the
gradual build tips over into sudden action? Even physical
tipping points are incredibly hard to pin down because of
emergent effects that are difficult to predict from initial
circumstances; my intuition is that social tipping points would
be even more difficult to predict. Do you get any firm
predictions about tipping points out of this model, and how
confident are you?
Fran Moore:
Part of the reason we built a model like this was this idea that
you can get these tipping-like, nonlinear behaviors in the
social, political, and technical systems that produce emissions.
David Roberts:
If you look back on history, most progress comes out of something
like that punctuated equilibrium model – things were the same for
a while, then whoosh, a bunch of stuff changes. Actual policy
history does not do these gently, upwardly sloping lines that you
see in models so often.
Fran Moore:
The first step in trying to understand that is to actually have a
model that can generate those. Our model can definitely do that.
To some extent, it was designed to do that; we went around
looking for these feedback loops and we coupled them all
together, and that's inherently going to be a system that under
certain conditions is going to give you this tipping-style
behavior.
The reason why some modeling communities don't like this type of
modeling is exactly that reason, that you allow for this complex,
rapidly changing, accelerating behavior under certain conditions.
It’s not necessarily super well constrained what the future looks
like, because you're allowing for rapid changes in ways that are
going to produce futures that maybe we can't really imagine right
now. When we look out we tend to extrapolate from the trajectory
we're on, rather than accounting for the accelerating feedbacks
that we're capturing here.
David Roberts:
We don't necessarily know what public opinion will do in response
to literally unprecedented conditions.
Fran Moore:
Yeah. So the goal is to try and draw on the theory that we have
already in the social and political sciences and put them
together.
David Roberts:
Your paper mentions trying to learn from past episodes of rapid
social change, previous tipping points. Can you pull durable
lessons out of those past examples?
Fran Moore:
There are other strategies that might do a case study example of
past social change. Here we're trying to abstract from that a
little further and say, what are the underlying dynamics and more
fundamental processes that revolve around things like social
networks and information and political institutions and
power?
If you recognize the uncertainty, and that's what we're doing
with our 100,000 runs, then the other thing you can do is query
the model – to say, what combinations of parameters put us in a
world where we get positive rapid transformation and what sets of
parameters put us in a world that we don't? You can start to ask
those types of questions.
These models with tipping points are not necessarily fully
predictive; that's not necessarily what they're trying to do, in
the sense of “we're going to have a tipping point in 2042.” But
they're still informative about the system, and they're still
potentially informative to management of that system.
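[Querying a large ensemble like the 100,000 runs Moore mentions might look something like the following; the run structure, the single "feedback strength" parameter, and the emissions threshold are invented for illustration.]

```python
# Sketch of "querying" a Monte Carlo ensemble: compare the parameter
# distributions of runs that ended in rapid decarbonization against
# those that did not. The toy run generator below stands in for the
# real model; the threshold is an arbitrary illustrative cutoff.
import random

random.seed(1)
runs = []
for _ in range(100_000):
    feedback = random.uniform(0.0, 1.0)           # sampled parameter
    # pretend outcome: 2100 emissions fall (noisily) as feedback strengthens
    emissions_2100 = 50.0 * (1.0 - 0.8 * feedback) + random.gauss(0, 5)
    runs.append((feedback, emissions_2100))

low = [f for f, e in runs if e < 15.0]   # "rapid transformation" worlds
high = [f for f, e in runs if e >= 15.0]

mean_low = sum(low) / len(low)
mean_high = sum(high) / len(high)
# runs that reach low emissions tend to have stronger feedbacks
```

The point is not to predict which world we are in, but to characterize which parameter combinations separate the transformative futures from the stuck ones.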
David Roberts:
You're just constraining the field of possible outcomes. You
don't have to get to a single prediction to be helpful.
Fran Moore:
That's really important. People are sometimes nervous about this
style of model, because you can get rapid changes that maybe
make us a little uncomfortable about making predictions like
that.
David Roberts:
Right, you don't want to bet on those things.
Fran Moore:
But we're actually able to constrain the set of, say, 2100
temperatures compared to looking at the range of representative
concentration pathways and saying “well, it could be 8.5 and it
could be 2.6 and we just can't put probabilities over those.”
That is a huge range. We can really say “both those ends are
pretty unlikely, and probably we’re somewhere in the middle.”
David Roberts:
The balance of your model runs that try to capture sociopolitical
processes end up with lower emissions than business as usual,
which I take to mean that on balance, these social-political
feedback loops are moving us in a positive direction. How do we
know there won't be loops pushing in the other direction? One of
the big things people are talking about these days is the looming
possibility of eco-fascism, where climate impacts cause people to
get into a lifeboat mentality and build walls and hoard the rest
of their fossil fuels. You can imagine feedback loops pushing us
in the wrong direction. How do you think about the possibility of
negative loops?
Fran Moore:
If I was going to expand the model further, we would maybe pay
more attention to those types of potential negative feedback
loops, or balancing feedback loops. It’s fair to say we built a
model to be tippy because we were looking for tipping points and
we wanted to make sure we had the potential to capture them, and
we definitely do. But thinking a bit more about what some of
these balancing effects might be and how they might slow down
that tippiness is a way we would want to expand the model.
The one you talked about is definitely one we thought about, this
idea that with mitigation, you're trying to provide a global
public good, and that’s difficult at the best of times. Maybe as
things get worse or are perceived to be getting worse, it becomes
more and more difficult to provide global public goods, and
instead, maybe we would focus on much more local public goods, or
no public goods at all, and switch more into an adaptation focus.
That's definitely a dynamic that you could imagine playing out,
and that could potentially have some effects on the model,
depending exactly how it was parameterized.
The other important balancing feedback that we don't have in
there at the moment is reaction against carbon pricing. That
probably is important: people may tolerate higher energy
prices, but not ones that rise very quickly. If you're raising
carbon prices very, very quickly, you're probably going to get
negative reactions to that and public opinion that will slow that
down. We could definitely incorporate more of that into the
model.
David Roberts:
It's on my mind these days, because I look around the world and
it seems like reactionary backlash against progressive movement
is very real.
Fran Moore:
We're clearly also no longer in a business-as-usual world. We
have carbon policies in many countries, and we have accelerating
reductions in the cost of energy technologies. What this model
captures is spillover effects where you can drive down costs
with just a little bit of policy. So if you do have
reinforcing feedback loops, you don't necessarily need really
fast climate policy to get some big reductions,
potentially.
By acknowledging that you can have accelerations in directions
you're not necessarily focused on, it can show where there are
positive places; and clearly, things are stuck in some places as
well at the moment.
David Roberts:
The social cost of carbon is an attempt to put a number on the
total economic damages wrought by a ton of carbon emissions. It’s
useful, for example, if you're going to make climate policy; you
need to know on some level how much it hurts to emit a ton of
carbon so you can calibrate your cost-benefit analysis and
whatever else. In trying to capture all the damages, you are
inevitably getting into difficult-to-quantify areas like the
worth of a species, the value of intact ecosystems, the value of
a human life. The decisions you make on these fuzzy variables
have practical real-world effects insofar as they show up in the
social cost of carbon, so it matters quite a bit how you quantify
these things.
There's been a lot of critique of the social cost of carbon
lately on a couple of measures. One is that by the time you make
all these value judgments, by the end, it's faux precision. The
other general line of critique is that because certain things are
so much easier to quantify than others, those are more likely to
be incorporated in the social cost of carbon, and the things that
are difficult to quantify tend to be on the damage side, so by
restricting your vision to what you can quantify, you are
undercounting the damages.
I'd love to hear you talk a little bit about the social cost of
carbon and how you balance this.
Fran Moore:
Your point about undercounting is definitely true. There are
effects of climate change that probably we are always going to be
unable to put dollar values on: things like effects on conflict
risk, loss of cultural heritage, migration.
In my head, it's always helpful to distinguish between the social
cost of carbon – the number that we come up with that is legally
defensible and can survive as it’s dragged through the courts,
which happens as soon as the US government comes out with
whatever number it's coming out with; we need to be able to go
into court, to show defensibly that this number came from sound
scientific and economic processes that were transparent and other
people can agree with them – and then the actual costs of climate
change that are potentially and probably unboundedly large above
that.
Those two things are not the same thing. But we can do the best
job we can at the former, getting it as comprehensive and
up-to-date and with as sound science as we can. Why wouldn't we
do that? We spend an awful lot of time and money documenting
climate change impacts, and the only formal way in which those
get into considerations of climate policy and US regulatory
analysis is via something like the social cost of carbon. It
seems somewhat crazy to me that we would do a lot of this work on
documenting what climate change impacts are and not make that
final step of actually trying to incorporate it into regulatory
analysis, as and when we can.
David Roberts:
Even given all the uncertainties and fuzziness, it's better to
have a number than not have a number.
Fran Moore:
Everyone is very willing to give you numbers on how costly
climate policy is going to be, and how many jobs it’s going to
cost, and how much it’s going to raise energy prices. It seems
pretty important to have on the other side of that some well-done
accounting on what we’re getting for this. Those numbers don't
have to just be in dollar terms, which is what the social cost of
carbon does. But that is the language in which a lot of policy
operates. You're fighting from a losing position if you're not
able to provide that measure of the benefits of these kinds of
policies.
As a legitimate critique of exactly how this modeling has been
done over the last 30 or so years, it's definitely fair to say
these models got stuck at a certain place, in particular in terms
of representing what we know about climate change damages. They
were really not where we needed them to be to have real
confidence that they're telling us about what we know about
climate change damages.
But there's been a lot of work to fix that, some of which I've
contributed to over the last 10 years. The US government is in
the process of updating this number, and I think you'll see a lot
of those benefits being reflected in the revised versions.
David Roberts:
Every critique I've ever heard of it from climate scientists says
it's too low. It makes me think that there might be some danger
in having this too-low figure getting stuck in practice.
Fran Moore:
It's good to recognize that it misses stuff. But also, if we had
a global carbon tax of $50 per ton, which is what the current
number is, we would be in a totally different place than we are
right now. Maybe it's low, but even if we just took what we're
currently counting seriously as a guide to policy, we'd be in a
really different place.
It is not the fault of these models that we're not in that place.
The models have been saying for a long time that the costs of
climate change are real and are positive in the sense that we
should be doing something about climate change. Then we get into
debates around “is it high enough to justify 2°” or whatever, but
that's not the place we're in right now in the policy
sphere.
David Roberts:
Another critique is that a lot of the key variables that produce
the social cost of carbon are, at root, value questions. The
famous example here is discount rates – how much do we value
future costs and benefits relative to present day costs and
benefits? There's a long literature of people arguing over what
the right discount rate is, and in the end, there's no empirical
way to resolve that argument. Ultimately, you are making a value
judgment about how much we value the future. And what you decide
that figure is absolutely shapes, in a very fundamental way, the
values that you end up with.
Do you ever worry that we ought to be having that debate over
values out in the open? In terms of values, do you worry that
putting a precise number on it obscures the fact that there's a
values debate at all?
Fran Moore:
As you might have gathered, I tend to take a more practical view
of the matter rather than get into philosophical debates. When
people say “my personal value judgment is this” that's fine, and
you can plug that into the SCC models and calculate what that
does to the SCC, but as an input into regulatory analysis, the
way in which we carry out these values debates is through
government, through political engagement. When the EPA and the
interagency working group come up with the social cost of carbon,
they are applying these discount rates, which do represent
something about how we're going to value the future under various
different epistemic arguments, and that's part of our democratic
decision-making process. It’s not divorced from that. Just
because there are values involved doesn't mean that it's not
something that belongs in policy, because policy is a
representation of our values. So I don't really see the tension
there, in terms of how it's actually applied.
David Roberts:
You wrote a recent comment in Nature with Zeke Hausfather that
comes from a very different direction than your paper about the
social determinants of climate change, but arrives at a very
similar destination. Can you explain what that comment was about
and the research it was describing?
Fran Moore:
This was an accompanying comment on a recent paper by Malte
Meinshausen, which looked at what countries have pledged for
their net zero commitments. In that paper, they add them all up,
estimate what that does in terms of emissions, and show that if
fully realized, these long-term pledges get us really close to
that 2° Paris Agreement target.
David Roberts:
That's a very big deal. It doesn’t seem like that news has really
gotten out yet.
Fran Moore:
I agree. When I give talks, I try to make sure to say that we're
making progress. We're bending things. One thing economists often
think about is that expectations are important; if businesses and
investors and planners expect things to be going in a certain
direction, then the capital allocations will flow accordingly.
Those expectations bring themselves into actuality in some ways,
although not fully, obviously.
What we did in our comment on this paper, and I have to give Zeke
the vast majority of the credit for this, was to pull together
not just the current Meinshausen study but also a number of other
papers, including my paper that we've been talking about, that
have also tried to look at probabilities of temperature outcomes
under different emission scenarios by 2100.
There are a number of different approaches: some of those might
just look at the effects of current policy; some might look at
2030 pledges; some are more fully probabilistic, like there's a
recent study out of Resources for the Future that does various
expert elicitations combined with statistical modeling
work to look at distributions of emissions and temperatures.
Collectively, they do provide a much tighter temperature bound
than if you were to just look at the range of, say, RCP
scenarios.
David Roberts:
When you gather all these models up and average them out, what
range of temperature can we reasonably say we are headed toward
now?
Fran Moore:
We find, and it matches what the other studies have found using
very different methods, that we put a lot of probability mass in
this range between 2° and 3°. That can definitely go up,
particularly on the high end, based on uncertainties in the
climate system – so if we're more unlucky on carbon cycle
feedbacks, or on what the climate sensitivity looks like, we
could definitely be above 3°, even getting toward 4°. But the
probability mass right now is 2°-ish on the low end – maybe below
that, depending on what happens with carbon capture, say – and
then between 2° and 3°, essentially.
David Roberts:
Ten or 20 years ago, 4° or 5° or 6°, even 8° were on the table. I
don't know if there's common agreement on this, but I think once
you're getting up above 4°, that's where you get into “does
advanced human civilization persist?” type of questions, whereas
between 2° and 3° is bad, but potentially non-catastrophic.
There was an IPCC report that talked about the difference between
1.5° and 2° in very helpful, clear terms; I feel like we need
that same thing for the gradations between 2° and 3°, because
that now looks like where we're going. How are we supposed to
feel about this, Fran? Are we supposed to be optimistic? Happy?
Still filled with dread? What does “between 2° and 3°” mean?
Fran Moore:
I think you have an appropriately nuanced and mixed set of
feelings about this. The impacts we've seen so far, the extreme
events – heat extremes, rainfall intensity extremes – that have
even taken climate scientists by surprise are certainly enough, I
think, to worry about at this range of 2° to 3°. But obviously,
there would be an awful lot more to worry about if we saw we were
getting up to 4° and 5° of warming by 2100. The fact that we can,
with increasing confidence, start to rule out the really extreme
rates of temperature increase is definitely good news. But
there's plenty to worry about at this more moderate range of
warming as well.
David Roberts:
It’s such a complicated thing to explain to a public.
Fran Moore:
That's why things like the social cost of carbon can be really
helpful. It's designed to think about the margin – for that
additional ton of carbon dioxide, how bad is it? You don't have
to say “climate change is a disaster” or “it’s solved” – we're
always going to be at this margin of “should we do more?” The
social cost of carbon can help you balance that, recognizing that
it's uncertain and there's a lot missing from it. In those
real-world cases, where we’re in the middle somewhere and we
should probably do more – but how much more? – it does help
you.
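That marginal framing can be sketched with a toy calculation (the linear damage function, the $0.5-per-ton-per-year figure, and the 3% discount rate are all assumed for illustration, not from the episode): the social cost of carbon is the extra discounted damage caused by one more ton, not a verdict on climate change as a whole.

```python
# Toy sketch (all numbers assumed): the social cost of carbon as a
# marginal quantity – the additional present-value damage from
# emitting one more ton of CO2 on top of the existing path.

def discounted_damages(tons: float, rate: float = 0.03,
                       years: int = 100) -> float:
    """Present value of damages from `tons` of cumulative emissions,
    assuming each ton causes $0.5 of damage per year."""
    annual = 0.5 * tons  # assumed linear annual damage, $/year
    return sum(annual / (1.0 + rate) ** t for t in range(1, years + 1))

# Marginal damage of one extra ton on top of the current path:
base = discounted_damages(1_000.0)
plus_one = discounted_damages(1_000.0 + 1.0)
scc = plus_one - base
print(f"marginal (social) cost of one extra ton: ${scc:.2f}")
```

The point of the construction is that it answers "should we do a bit more?" without requiring an all-or-nothing judgment about climate change.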
The other point is that, given this increasing sense from various
directions in the literature that we can narrow down this range
of warming, that should be informing what our climate modeling
looks like and what these climate impact studies are doing. We
have a lot that look at RCP 8.5, which we think is probably quite
unlikely now; we have a lot that look at lower levels of warming;
and we need something more in-between if we're going to start
providing serious advice to planners and governments about
adaptation.
David Roberts:
One of the big debates in climate science is how to treat what
are called “tail risks” – ends of the spectrum where you have low
probability but extremely high-impact possibilities. Martin
Weitzman's work famously made the case that we’re misleading
ourselves when we make policy based on the middle of the bell
curve; we need to be making policy based on foreclosing these
risks, because even if it's a small risk, the catastrophe would
be so complete that in a sense, it’s worth almost anything to
avoid it.
In the context of that argument, it looks like our modeling is
reducing those tails, at the very least. So how should we think
about tail risks? Is the possibility of 4° or higher low enough
at this point that I, as an average citizen, should breathe a
sigh of relief? Or is it still high enough that it activates
these Weitzman-y do-anything-to-avoid-it reactions?
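Weitzman's fat-tail point can be sketched numerically (a toy Monte Carlo with assumed parameters, not drawn from his papers): hold the median warming fixed and compare a thin-tailed with a fat-tailed warming distribution; with convex damages, the rare extreme draws pull the expected loss up.

```python
# Toy Monte Carlo (all parameters assumed for illustration):
# expected damages under thin- vs fat-tailed warming distributions
# that share the same median warming of 2.5C.
import math
import random

random.seed(0)
N = 200_000

def damages(temp_c: float) -> float:
    # Convex damage function: loss grows with the square of warming.
    return temp_c ** 2

# Thin tail: normal warming centered on 2.5C.
thin = [random.gauss(2.5, 0.5) for _ in range(N)]
# Fat tail: lognormal with the same median (exp(mu) = 2.5) but
# occasional much larger draws in the right tail.
fat = [random.lognormvariate(math.log(2.5), 0.4) for _ in range(N)]

mean_thin = sum(damages(t) for t in thin) / N
mean_fat = sum(damages(t) for t in fat) / N
print(f"thin-tailed expected damage: {mean_thin:.2f}")
print(f"fat-tailed expected damage:  {mean_fat:.2f}")
# Same median warming, but the heavy right tail drags the mean
# damage well above the thin-tailed case.
```

This is the sense in which policy keyed to the middle of the distribution can understate the expected cost.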
Fran Moore:
When I think about Weitzman’s writing on things like fat tails
and the more catastrophic end of climate damages, the importance
there is on the distribution of damages. That is both about
emissions and what the climate system does, but from my looking
at these systems, at what drives damages in models like the
social cost of carbon, what's even more important is what
temperature does to human society and the things we care about. The
tails on that, I think, are really large. That is not super
well-constrained. You can get quite heavy probability mass at
some quite large damages at moderate levels of warming under
plausible scenarios of just how sensitive human systems are to
changes in climate.
Even if we think we're narrowing in on the temperature range,
that's not giving us a huge amount of confidence, because we're
not necessarily narrowing in on the damages; the uncertainty
bounds on those are still really enormous – especially for some
people. These are not distributed equitably. There are going to
be catastrophic consequences of this level of warming for some
communities, perhaps many communities. So when we look at that
distribution, we don't treat it as just a central estimate. We do
look at a full uncertainty and that uncertainty is large. That
right tail does pull up the mean.
The question of how exactly that translates into policy is,
again, a values question. How much you weight these unlikely but
very bad outcomes is essentially a question of risk aversion and
preferences over risk, in the same way that the discount rate is
about preferences over time. That's something that can operate
through the political system as well. Just trying to keep that
uncertainty and that full distribution in the regulatory analysis
as far as possible is good, although those processes do tend to
be relatively averse to uncertainty.
David Roberts:
To summarize: the work we've done so far to address climate
change and the work we’ve done so far in climate modeling has
somewhat narrowed the possible range of outcomes, so there's some
comfort to take in that; but on the flip side, the remaining
uncertainty about damages to society and at least the possibility
of truly large and catastrophic damage to society are still very
much there, so there's no reason to reduce our sense of urgency
about policy. Is that fair?
Fran Moore:
Yes, I think that's true. If you look at even just the social
cost of carbon we have right now, we're so far short of it. Even
that by itself, you don't even need to get to a fat tail; we
should definitely be doing more than we're doing right now on a
purely cost-benefit basis. We're definitely in a place where
we're going to get benefits by doing more. Once we do a lot more,
we can argue about that margin, but right now, the net benefits
definitely lie with more ambition.
David Roberts:
Well, that seems like a great place to close. Thanks so much for
coming on. And thanks for all your research.
Fran Moore:
Thanks so much for the great questions.
This is a public episode. If you'd like to discuss this with other
subscribers or get access to bonus episodes, visit
www.volts.wtf/subscribe