A newsletter, podcast, & community focused on the technology, politics, and policy of decarbonization. In your inbox once or twice a week.
Description
3 years ago
British researcher Erica Thompson’s recently published book is a
thorough critique of the world of mathematical modeling. In this
episode, she discusses the limitations of models, the role of
human judgment, and how climate modeling could be improved.
Text transcript:
David Roberts
Everyone who's followed climate change for any length of time is
familiar with the central role that complex mathematical models
play in climate science and politics. Models give us predictions
about how much the Earth's atmosphere will warm and how much it
will cost to prevent or adapt to that warming.
British researcher Erica Thompson has been thinking about the
uses and misuse of mathematical modeling for years, and she has
just come out with an absorbing and thought-provoking new book on
the subject called Escape from Model Land: How Mathematical
Models Can Lead Us Astray and What We Can Do About It.
More than anything, it is an extended plea for epistemological
humility — a proper appreciation of the intrinsic limitations of
modeling, the deep uncertainties that can never be eliminated,
and the ineradicable role of human judgment in interpreting model
results and applying them to the real world.
As Volts listeners know, my favorite kind of book takes a set of
my vague intuitions and theories and lays them out in a cogent,
well-researched argument. One does love having one's priors
confirmed! I wrote critiques of climate modeling at Vox and even
way back at Grist — it's been a persistent interest of mine — but
Thompson's book lays out a full, rich account of what models can
and can't help us do, and how we can put them to better use.
I was thrilled to talk with her about some of her critiques of
models and how they apply to climate modeling, among many other
things. This is a long one! But a good one, I think. Settle in.
Alright, then, with no further ado, Erica Thompson, welcome to
Volts. Thank you so much for coming.
Erica Thompson
Hi. Great to be here.
David Roberts
I loved your book, and I'm so glad you wrote it. I just want to
start there.
Erica Thompson
That's great. Thank you. Good to hear.
David Roberts
Way, way back in the Mesozoic era, when I was a young writer at a
tiny little publication called Grist—this would have been like
2005, I think—one of the first things I wrote that really kind of
blew up and became popular was, bizarrely, a long piece about
discount rates and their role in climate models. And the whole
point of that post was, this is clearly a dispute over values.
This is an ethical dispute that is happening under cover of
science. And if we're going to have these ethical judgments so
influential in our world, we should drag them out into the light
and have those disputes in public with some democratic input.
And for whatever reason, people love that post. I still hear
about that post to this day. So, all of which is just to say, I
have a long-standing interest in this and models and how we use
them, and I think there's more public interest in this than you
might think. So, that's all preface. I'm not here to do a
soliloquy about how much I loved your book. Let's start with just
briefly about your background. Were you in another field and kept
running across models and then started thinking about how they
work? Or were you always intending to study models directly? How
did you end up here?
Erica Thompson
Yeah, okay. So, I mean, my background is maths and physics. And
after studying that at university, I went to do a PhD, and that
was in climate change physics. So climate science about North
Atlantic storms. And the first thing I did—as you do—was a
literature review about what would happen to North Atlantic
storms given climate change, more CO2 in the atmosphere. And so
you look at models for that. And so, I started looking at the
models, and I looked at them, and this was sort of 10-15 years
ago now—and certainly there's more consensus now—but at that
time, it was really the case that you could find models doing
almost anything with North Atlantic storms.
You could find one saying... the storm tracks would move north,
they'd move south, they'd get stronger, they'd get weaker, they'd
be more intense storms, less intense storms. And they didn't even
agree within their own error bars. And that was what really stuck
out to me, was that, actually, because these distributions
weren't even overlapping, it wasn't telling me very much at all
about North Atlantic storms, but it was telling me a great deal
about models and the way that we use models. And so I got really
interested in how we make inferences from models. How do we
construct ranges and uncertainty ranges from model output? What
should we do with it? What does it even mean? And then I've kind
of gone from there into looking at models in a series of other
contexts. And the book sort of brings together those thoughts
into what I hope is a more cohesive argument about the use of
models.
David Roberts
Yeah, it's a real rabbit hole. It goes deep. The book is focusing
specifically on mathematical models, these sort of complex models
that you see today in the financial system and the climate
system. But the term "model" itself, let's just start with that
because I'm not sure everybody's clear on just what that means.
And you have a very sort of capacious definition.
Erica Thompson
I do, yeah.
David Roberts
...of what a model is. So just maybe let's start there.
Erica Thompson
Yeah. So, I mean, I suppose the models that I'm talking about
mostly, when I'm talking in the book, is about complex models
where we're trying to predict something that's going to happen in
the future. So whether that's climate models, weather models—the
weather forecast is a good example—economic forecasts, business
forecasting, pandemic and public health forecasting are ones that
we've all been gruesomely familiar with over the last few years.
So those are kind of the one end of a spectrum of models, and
they are the sort of big, complex, beast-end of the spectrum. But
I also include, in my idea of models, I would include much
simpler ones, kind of an Excel spreadsheet or even just a few
equations written down on a piece of paper where you say, "I'm
trying to sort of describe the universe in some way by making
this model and writing this down."
But also I would go further than that, and I would say that any
representation is a model insofar as it goes. And so that could
include a map or a photograph or a piece of fiction—even if we go
a bit more speculative—fiction or descriptions. These are models
as metaphors. We're making a metaphor in order to understand a
situation. And so while the sort of mathematical end of my
argument is directed more at the big, complex models, the
conceptual side of the argument, I think, applies all the way
along.
David Roberts
Right, and you could say—in regard to mathematical models—some of
the points you make are you can't gather all the data. You have
to make decisions about which data are important, which to
prioritize. So the model is necessarily a simplified form of
reality. I mean, you could say the same thing about sort of the
human senses and human cognitive machinery, right? Like, we're
surrounded by data. We're constantly filtering and doing that
based on models. So you really could say it's models all the way
down.
Erica Thompson
Yes.
David Roberts
Which I'm going to return to later. But I just wanted to lay that
foundation.
So in terms of these big mathematical models, I think one good
distinction to start with—because you come back to it over and
over throughout the book—is this distinction between uncertainty
within the model. So a model says this outcome is 60% likely,
right? So there's like a certain degree of uncertainty about the
claims in the model itself. And then there's uncertainty, sort of
extrinsic to the model, about the model itself, whether the model
itself is structured so as to do what you want it to do, right?
Whether the model is getting at what you want to get at.
And those two kinds of uncertainty map somehow onto the terms
"risk" and "uncertainty."
Erica Thompson
Somehow, yes.
David Roberts
I'm not totally sure I followed that. So maybe just talk about
those two different kinds of risks and how they get talked about.
Erica Thompson
So I could start with "risk" and "uncertainty" because the
easiest way to sort of dispatch that one is to say that people
use these terms completely inconsistently. And you can find in
economics and physics, "risk" and "uncertainty" are used
effectively in completely the opposite meaning.
David Roberts
Oh, great.
Erica Thompson
But generally one meaning of these two terms is to talk about
"uncertainty," which is, in principle, quantifiable, and the
other one is "uncertainty," which perhaps isn't quantifiable. And
so in my terms, in terms of the book, so I sort of conceptualize
this idea of "model land" as being where we are when we are sort
of inside the model, when all of the assumptions work, everything
is kind of neat and tidy.
You've made your assumptions and that's where you are. And you
just run your model and you get an answer. And so within "model
land," there are some kind of uncertainties that we can quantify.
We can take different initial conditions and we can run them, or
we can sort of squash the model in different directions and run
it multiple times and get different answers and different ranges
and maybe draw probability distributions.
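This kind of within-model-land uncertainty quantification can be sketched with a toy example (my illustration, not from the book): a chaotic map stands in for a complex simulator, and rerunning it from slightly perturbed initial conditions yields a spread of outcomes.

```python
import numpy as np

def toy_model(x0, steps=50, r=3.9):
    """A chaotic toy model (the logistic map), standing in for any
    simulator whose output depends sensitively on where it starts."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

rng = np.random.default_rng(0)

# Perturb the initial condition slightly and rerun many times...
initial_conditions = 0.5 + rng.normal(0.0, 1e-3, size=1000)
outcomes = np.array([toy_model(x0) for x0 in initial_conditions])

# ...giving a spread of outcomes: a probability distribution that is
# well-defined inside "model land," whatever its relation to reality.
print(outcomes.mean(), outcomes.std())
```

The spread is a statement about the model's sensitivity, not directly about the world; that gap is exactly the one the discussion turns to next.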
But actually, nobody makes a model for the sake of understanding
"model land." What we want to do is to inform decision making in
the real world. And so, what I'm really interested in is how you
take your information from a model and use it to make a statement
about the real world. And that turns out to be incredibly
difficult and actually much more conceptually difficult than
maybe you might first assume. So you could start with data and
you could say, "Well, if I have lots of previous data, then I can
build up a statistical picture of how good this model is,"
whether it's going to be any good.
And so you might think of the models and the equations that sent
astronauts to the moon and back. Those were incredibly good and
incredibly successful. And many models are incredibly successful.
They underpin the modern world. But these are essentially what I
call "interpolatory models." They're basically...they're trying
to do something where we have got lots of data and we expect that
the data that we have are directly relevant for understanding
whether the predictions in the future are going to be any good.
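The interpolatory/extrapolatory distinction can be illustrated with a hypothetical curve-fitting sketch: a flexible model fit where data exist predicts well, but the same model pushed beyond its data can fail badly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data only cover x in [0, 1] ("where we have got lots of data").
x_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.05, size=20)

# A flexible polynomial fit: a stand-in for a well-calibrated model.
coeffs = np.polyfit(x_train, y_train, deg=9)

# Interpolation: a prediction inside the data range is close to the truth.
err_in = abs(np.polyval(coeffs, 0.5) - np.sin(2 * np.pi * 0.5))

# Extrapolation: the same model outside its data can be wildly off,
# with nothing in the fit statistics to warn us.
err_out = abs(np.polyval(coeffs, 1.5) - np.sin(2 * np.pi * 1.5))

print(err_in, err_out)
```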
David Roberts
Right.
Erica Thompson
Whereas when you come to something like climate change, for
example, or you come to any kind of forecasting of a social
system, you know that the underlying conditions are changing, the
people are changing, the politics are changing, even with the
physics of the climate, the underlying physical laws, we hope,
are staying the same. But the relationships that existed and that
were calibrated when the Arctic was full of sea ice, for example,
what do we have to go on to decide that they're going to be
relevant when the Arctic is not full of sea ice anymore? And so
we rely much more on expert judgment. And at that point, then you
get into a whole rabbit hole of, well, what do we mean by expert
judgment?
And maybe we'll come on to some of these themes later in the
discussion, but these ideas of trust. So how are we going to
assess that uncertainty and make that leap from model land back
into the real world? It becomes really interesting and really
difficult and also really socially, sort of, dependent on the
modeler and the society that the model is in.
David Roberts
Right, it's fraught at every level. And one of the things that I
really got from your book is that it's really, really far from
straightforward to judge a model's quality. Like, you talk
about... what is the term, a horse model? Based on the guy who
used to make hand gestures at the horse, and the horse looked
like it was doing addition, looks like it was doing math, but it
turns out the horse was doing something else entirely. And so it
only worked in that particular situation. If you took the horse
out of that situation, it would no longer be doing math.
Erica Thompson
And I think what's interesting is that the handler wouldn't even
have realized that. That it wasn't a deliberate attempt to
deceive, it was the horse sort of picking up subconsciously or
subliminally on the movement and the body language of the handler
to get the right answer.
David Roberts
Right. Well, this is for listeners, this is kind of a show that
this guy used to do. He would give his horse arithmetic problems
and the horse would tap its foot and get the arithmetic right and
everybody was amazed. And so your point is just you can have a
model that looks like it's doing what you want it to do, looks
like it's predictive, in the face of a particular data set, but
you don't know a priori whether it will perform equally well if
you bring in other data sets or emphasize other data sets or find
new data. So even past performance is not any kind of guarantee,
right?
Erica Thompson
Yeah. And so it's this idea of whether we're getting the right
answer for the right reasons or the right answer for the wrong
reasons. And then that intersects with all sorts of debates in AI
and machine learning about explainability and whether we need to
know what it's doing in order to be sure that it's getting the
right answer for the right reasons or whether it doesn't actually
matter. And performance is the only thing that matters.
David Roberts
So let's talk then about judging what's a good and bad model,
because another good point you make, or I think you borrow, is
that the only way to judge a model, basically, is relative to a
purpose. Whether it is adequate to the purpose we're putting it
to, there's no amount of sort of cleanliness of data or like
cleverness of rules. Like nothing in the model itself is going to
tell you whether the model is good. It's only judging a model
relative to what you want to do with it. So say a little bit
about the notion of adequacy to purpose.
Erica Thompson
Yeah. So this idea of adequacy for purpose is one that's really
stressed by a philosopher called Wendy Parker, who's been working
a great deal with climate models. And so, I guess, the thing is
that what metric are you going to use to decide whether your
model is any good? There is no one metric that will tell you
whether this is a good model or a bad model. Because as soon as
you introduce a metric, you're saying what it has to be good at.
I can take a photograph of somebody. Is it a good model of them?
Well, it's great if you want to know what they look like, but
it's not very good if you want to know what their political
opinions are or what they had for dinner. And other models in
exactly the same way. They are designed to do certain things. And
they will represent some elements of a system or a situation
well, and they might represent other elements of that situation
badly or not at all. And "not at all" doesn't really matter,
because it's something you can't even picture within the model. But if it
represents it badly, then it may just be that it's been
calibrated to do something else. So the purpose matters.
And when you have a gigantic model, which might be put to all
manner of different purposes. So a climate model, for example,
could be used by any number of different kinds of decision
makers. So the question, "Is it a good model?" Well, it depends
whether you are an international negotiator deciding what carbon
emissions should be or whether you're a subsistence farmer in
Sub-Saharan Africa or whether you're a city mayor who wants to
decide whether to invest in a certain sort of infrastructure
development or something or whether you're a multinational
insurance company with a portfolio of risks. You will use it in
completely different ways.
And the question of whether it is any good doesn't really make
sense. The question is whether it is adequate for these different
purposes of informing completely different kinds of decisions.
David Roberts
Right, or even if you're just thinking about mitigation versus
adaptation, it occurs to me, different models might work better
for those things. I guess the naive thing to think is, if you
find one that's working well for your purpose that means it is
more closely corresponding to reality than another model that
doesn't work as well for your purpose. But, really, we don't know
that. There's just no way to step outside and get a view of it
relative to reality and ever really know that.
Erica Thompson
Yeah and reality kind of has infinitely many dimensions so it
doesn't really make sense to say that it's closer. I mean, it can
absolutely be closer on the dimensions that you decide and you
specify. But to say that it is absolutely closer, I think,
doesn't actually make sense.
David Roberts
Right, yeah. The theme that's running through the book over and
over again is real epistemic humility.
Erica Thompson
Yes, very much so.
David Roberts
Which I think... you could even say it's an epistemically
humbling book. That's sort of the way I felt about it.
Erica Thompson
Great. That's really nice. I'm glad to hear that.
David Roberts
Yeah, at the end, I was like "I thought I didn't know much and
now I'm quite certain I know nothing at all."
Erica Thompson
But not nothing at all. I mean, hopefully, the way it ends is to
say that we don't know nothing at all, we shouldn't be throwing
away the models. They do contain useful information. We've just
got to be really, really careful about how we use it.
David Roberts
Yes, there's a really great quote, actually, that I've almost
memorized: "We know nothing for certain, but we don't know
nothing," I think is the way you put it in the book, which I
really like. We're
going to get back to that at the end, too. So another sort of
fascinating case study that you mentioned, sort of anecdote that
you mentioned that I thought was really, really revealing about
sort of the necessity of human expert judgment in getting from
the model to the real world is this story about the Challenger
shuttle and the O-rings. The shuttle had flown test flights,
several test flights beforehand using the same O-rings.
Erica Thompson
Yes.
David Roberts
...and had done fine. So there's sort of two ways you can look at
that situation. What one group argued was: "A shuttle with these
kind of O-rings will typically fail. And these successful flights
we've had are basically just luck." Like, we've had several
flights cluster on one side of the distribution, on the tail of
the distribution and we can't rely on that luck to continue. And
the other side said, "No, the fact that we've run all these
successful flights with these O-rings is evidence that the
structural integrity is resilient to these failed O-rings to the
sort of flaws in the O-rings."
And the point of the story was: both those judgments are using
the exact same data and the exact same models. And both judgments
are consonant with all the data and all the models. So, the point
being, no matter how much data you have—and even if people are
looking at the same data and looking at the same models—in the
end, there's that step of judgment at the end. What does it mean
and how does it translate to the real world that you just can't
eliminate, you need, in the end, good judgment.
Erica Thompson
Yeah, exactly. You can always interpret data in different ways
depending on how you feel about the model. And so another example
I give that is along very similar lines is thinking, sort of, if
you were an insurance broker and you'd had somebody come along
and sell you a model about flood insurance or about the
likelihood of flooding. And they said a particular event would be
pretty unlikely. And you use that and you write insurance. And
then the following year, some catastrophic event happens and you
get wiped out. What do you do next? Do you say, "Oh dear. It was
a one-in-a-thousand-year event, what a shame. I'll go straight
back into the same business because now the
one-in-a-thousand-year event has happened."
David Roberts
Right. It's perfectly commensurate with the model.
Erica Thompson
It's perfectly commensurate with the model, exactly. So do I
believe the model and do I continue to act as if the model was
correct or do I take this as evidence that the model was not
correct and throw it out and not go back to their provider and
maybe not write flood insurance anymore?
David Roberts
Right.
Erica Thompson
And those are perfectly...either of those would be reasonable. If
you have a strong confidence in the model, then you would take
option A and if you have low confidence in the model, you take
option B. But those are judgments which are outside of "model
land."
David Roberts
Right, right. Judgments about the model itself. And it just may
be worth adding that there is no quantity of data, or detail in a
model's rules, that can ever eliminate that judgment at the end
of the line, basically.
Erica Thompson
Yeah, because you have to get out of "model land." I mean, now
some parts of "model land" are closer to reality than others. So
if we have a model of rolling a die, you expect that to give you
a reliable, quantitative answer. If you have a model of ballistic
motion, like the equations taking astronauts to the moon and
back, you expect that to be pretty good, because you know that
it's good because it's been good in the past. And there is an
element of expert judgment because you're saying that my expert
judgment is that the past performance is a good warrant of future
success here. But that's a relatively small one and one that
people would generally agree on. And then when you go to these
more complex models and you're looking out into extrapolatory
situations, predicting the future and predicting things where the
underlying conditions are changing, then the expert judgment
becomes a much bigger and bigger and bigger part of that.
David Roberts
Yes. And that gets into the distinction between sort of modelers
and experts, which I want to talk about a little bit later, too.
But one more sort of basic concept I wanted to get at is this
notion of performativity, which is to say that models are not
just representing things, they're doing things and they're
affecting how we do things and they're not just sort of giving us
information there, they're giving us what you call a "conviction
narrative." So maybe just talk about performativity and what that
means.
Erica Thompson
Yeah, so the idea of performativity is about the way that the
models are part of the system themselves. So if you think about a
central bank, if they were to create a model which made a
forecast of a deep recession, it would probably immediately
happen because it would destroy the market confidence. So that's
a very strong form of performativity. Thinking about climate
models, of course, we make climate models in order to influence
and to inform climate policy. And climate policy changes the
pathway of future emissions and changes the outcomes that we are
going to get. So, again, the climate model is feeding back on the
climate itself.
And the same, of course, with pandemic models which were widely
criticized for offering worst-case scenarios. But obviously the
whole point of predicting a worst-case scenario isn't to just sit
around twiddling your thumbs and wait for it to come true, but to
do something about it so that it doesn't happen. I suppose,
technically, that would be called "counterperformativity" in the
sense that you're making the prediction, and by making the
prediction, you stop it from coming true.
David Roberts
Exactly. We get back, again, to, like, models can't really model
themselves. It's trying to look at the back of your head in a
mirror, ultimately there's an incompleteness to it. But I found
this notion of a conviction narrative. I found the point really
interesting that in some sense, in a lot of cases, it's probably
better to have a model than to not have one, even if your model
turns out to be incorrect. Talk about that a little bit: the uses
of models outside of their strictly representational,
informational role.
Erica Thompson
Yeah, okay. So I guess thinking about this kind of
performativity, and maybe counterperformativity, of models helps
us to see that they are not just prediction engines. We are not
just modeling for the sake of getting an answer and getting the
right answer. We are doing something, which is much more social
and it's much more to do with understanding and communication and
generating possibilities and understanding scenarios and talking
to other people about them and creating a story around it. And so
that's this idea of a conviction narrative.
And what I've sort of developed in the book is the idea that the
model is helping us to flesh out that conviction narrative. So,
"conviction" because it helps us to gain confidence in a course
of action, a decision in the real world, not in "model land." It
helps us to...and then "narrative" because it helps us to tell a
story. So we're, sort of, telling a story about a decision and a
situation and a set of consequences that flow from that. And in
the process of telling that story and thinking about all the
different things, whatever you happen to have put into your
model, and you're able to represent and you're able to consider
within that, developing that story of what it looks like and
developing a conviction that some particular course of action is
the right one to do, or that you'll be able to live with it, or
that it is something that you can communicate politically and
generate a consensus about.
David Roberts
Right. And very frequently those things are good in and of
themselves, even if they're inaccurate. You talk about some
business research, which found that sort of like businesses with
a plan do better than businesses without a plan. Even sometimes
that the plan, it's not a particularly good plan, just because
having a plan gives you that...just kind of a structured way of
approaching and thinking about something.
Erica Thompson
Yeah. And so maybe this is one of the more controversial bits of
the book, but I talk about, for example, astrology and systems
where if you're a scientist like me, you will say, "Probably
there is no predictive power at all in an astrological forecast
of the future." Okay. Opinions may differ. I personally think
that, essentially, they are random.
David Roberts
I think you're on safe ground here.
Erica Thompson
I think so. Probably with your audience, I am. But the point is
that doesn't make them totally useless. So they can have
genuinely zero value as prediction engines, but still be useful
in terms of helping people to think systematically about possible
outcomes, think about different kinds of futures, think about
negative possibilities as well as positive ones, and put all that
together just into a more systematic framework for considering
options and coming to a course of action.
David Roberts
Right, or think about themselves.
Erica Thompson
And think about themselves and their own weaknesses and
vulnerabilities as well as strengths. Yeah, absolutely. It gives
you a structure to do that. And I think that is absolutely not to
be underestimated. Because there's sort of those two axes.
There's the utility of prediction, the accuracy of prediction:
"How good is this model as a predictor of the future?" And then,
completely orthogonally to that, there is: "How good is this
model, in terms of the way that it is able to integrate with
decision making procedures? Does it actually help to support good
decision making?" And you can imagine all four quadrants of that.
Obviously, we sort of hope that models that are really good at
predicting the future will be really good at helping to support
decision-making. But, ultimately, if it could perfectly predict
the future and it was completely deterministic and it just told
you what was going to happen, that wouldn't be much use either.
You're back into sort of Greek myths and Greek tragedies,
actually being told your future is not that useful. You need to
have some degree of uncertainty in order to be able to have
agency and take action and have the motivation to do anything at
all.
David Roberts
Yeah, so I guess I would say that astrology wouldn't have hung
around for centuries, despite having zero predictive power...
Erica Thompson
If somebody didn't find it useful.
David Roberts
Right, if it did not have these other uses. I just thought that
was a little bit sort of tacking the other way from a lot of the
points, a lot of the points you're making in the book about the
sort of weaknesses or limitations of models, et cetera, et
cetera. But this was a point, I thought, where you sort of make
the counterpoint that, it's almost always better to have a model
than no model, it's better to have some...
Erica Thompson
Well, maybe. It depends what it is and it depends whose model it
is and it depends what the agenda is of the person who's
providing the model. And you can maybe take sort of both lessons
from the astrology example because I think you can find good
examples in the past of sort of vexatious astrologers or
astrologers with their own hidden agendas. Giving advice, which
was not at all useful or which was useful to themselves, but not
to the person who commissioned the forecast.
David Roberts
Yes. Or like the king deciding whether to invade a neighboring
country or something.
Erica Thompson
Right, yeah.
David Roberts
Not great for that. So given all these—and we've just really
skated over them, there's a lot more to all these—but given these
sort of limitations of mathematical models, this sort of
inevitable uncertainty about whether you're including the right
kinds of information, whether you're weighting different kinds of
information well, whether past performance is an indicator of
future performance, all these sort of limitations and the need
for expert judgment all, to my mind, leads to what I think is one
of your key points and one of the most important takeaways, which
is the need for diversity. Diversity, I think, these days has
kind of... the word conjures a sort of representational,
feel-good thing.
We need to have a lot of different kind of people in the room so
we can feel good about ourselves and everybody can see themselves
on the TV or whatever. But you're making a much more...very
practical, epistemic point about the need for diversity of both
models and modelers. So start with models. What would it mean
to...like if I'm trying to forecast the future of the severe
climate events, I think the naive, a naive sort of Western way of
thinking about this would be: you need to converge on the right
model, the one that is correct, right. The one that represents
reality. And your point is: you never reach that. And so in lieu
of being able to reach that, what works better is diversity. So
say a little bit about that.
Erica Thompson
Yeah, that's exactly it. So, I suppose the paradigm for model
development is that you expect to converge on the right answer,
exactly. But I suppose what I'm saying is that there can't, for
various mathematical reasons, be a systematic way of converging on
the right answer, essentially because model space has infinitely
many dimensions (I go into that in a bit more detail in the book
for the more mathematically inclined). And because we don't
have a systematic way of doing that, the statistics don't really
work. So if you have a set of models, you can't just assume that
they are independent and identically distributed, sort of throws
at a dartboard and we can't just average them to get a better
answer.
So the idea of making more models and trying to sort of wait for
them to converge on this correct answer just doesn't actually
make much sense. We don't want to know that by making more
similar models, we will get the same answer and the same answer
again and the same answer again. Actually, what we want to know
is that no plausible model could give a different answer. So
you're reframing the same question in the opposite direction.
What would it mean to convince ourselves that no plausible model
could give a different answer to that question. Well, instead of
trying to push everything together into the center and, by the
way, that's what the models that are submitted to the IPCC
report, for example, do. They tend to cluster and to try to find
consensus and to push themselves sort of towards each other. I'm
saying we need to be pushing them away.
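Thompson's point about models not being independent throws of darts at a dartboard can be made concrete with a standard statistical identity. For identically distributed model errors with variance sigma-squared and pairwise correlation rho, the variance of the ensemble mean is (sigma2/n)(1 + (n-1)rho), which floors at sigma2*rho no matter how many similar models you add. A minimal sketch with illustrative numbers (the function name and figures are mine, not from the book):

```python
def ensemble_mean_variance(sigma2, n, rho):
    """Variance of the mean of n identically distributed model errors
    with common variance sigma2 and pairwise correlation rho."""
    return (sigma2 / n) * (1 + (n - 1) * rho)

# Independent models: averaging 10 cuts the error variance tenfold.
indep = ensemble_mean_variance(1.0, 10, 0.0)      # 0.1

# Models that share assumptions (rho = 0.8): averaging barely helps,
# and adding more similar models cannot push below sigma2 * rho = 0.8.
shared = ensemble_mean_variance(1.0, 10, 0.8)     # 0.82
floor = ensemble_mean_variance(1.0, 10_000, 0.8)  # just above 0.80
```

With correlated models, the clustering she describes means the apparent agreement of an ensemble tells you far less than the raw count of models suggests.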
David Roberts
You talk about this drive for an über-model, the, whatever, the
CERN of climate models, this push among a lot of climate modelers
to find the sort of ultimate model, and you are pushing very much
in the other direction.
Erica Thompson
Yeah, I mean, that has a lot to commend it as a way to sort of
systematize the differences between models rather than the ad hoc
situation that we have at the moment. So I don't completely
disagree with Tim Palmer and his friends who say that sort of
thing. It's not a silly idea, it's a good idea, but I think it
doesn't go far enough because it would help us to quantify the
uncertainty within "model land," but it doesn't help us to get a
handle on the uncertainty outside "model land," the gap between
the models and the real world. And so what I'm saying is that if
we want to convince ourselves that no other plausible model could
give a different answer, then we need to be investigating other
plausible models.
Now the word "plausible" is doing a huge amount of work there and
actually then that is the crux of it is saying, well, how can we,
as a community, define what we mean by a plausible model? Do we
just define it sort of historically by...stick with climate for a
minute. We've started with these models of atmospheric fluid
dynamics and then we've included the ocean and then maybe we've
included a carbon cycle and some vegetation and improved the
resolution and all that sort of thing. But couldn't we imagine
models which start in completely different places that model the
same sorts of things?
And if you had got a more diverse set of models that you
considered to be plausible and you found that they all said the
same thing, then that would be really, very informative. And if
you had a set of plausible models and they all said different
things, then that would show you perhaps that the models that you
had, in some sense, had a bit of groupthink going on, that they
were too conservative and they were too clustered. And I do have
a feeling that that is what we would find if we genuinely tried
to push the bounds of the plausible model structures.
Now, actually, then you run into the question of plausible, and
that's a difficult one, because now we're into sort of scientific
expertise. Who is qualified to make a model? What do we mean by
"plausible"? Which aspects are we prioritizing? And then we
introduce value judgments. We say you have to be trained in
physics or you have to have gone to an elite institution, you
have to have x many years of experience in running climate
models. You have to have a supercomputer. And all of these are,
sort of, barriers to entry to have a model which can then be
considered within the same framework as everybody else's. So this
is another...then the social questions about diversity start
coming up, but I start with the maths and I work towards the
social questions. I think that we can motivate the social
concerns about diversity directly in the mathematics.
David Roberts
Right, so you want a range of plausible models that's giving
you...so you can get a better sense of the full range of
plausible outcomes. But then you get into plausibility, you get
into all kinds of judgments and then you're back to the modelers.
Erica Thompson
Exactly.
David Roberts
And you make the point repeatedly that the vast bulk of models
used in these situations, in climate and finance, et cetera, are
made by WEIRD people. I'm trying to think of the Western... you
tell me.
Erica Thompson
Yeah, never quite sure exactly what it stands for. I think it's
Western, Educated, Industrialized, Rich, and Democratic,
something like that. I suppose it's used to refer to the nation
rather than the individual person. But it's the same idea.
David Roberts
Right. The modelers historically have been drawn from a
relatively small...
Erica Thompson
From a very small demographic of elite people. Yeah, exactly.
David Roberts
And I feel like if there's anything we've learned in the past few
years, it's that it is 100% possible for a large group of people
drawn from the same demographic to have all the same blind spots
and to have all the same biases and to miss all the same things.
So, tell us a little bit about the social piece, then, because
it's not like the notion that you should have a degree or some
experience with mathematical models to make one and weigh in on
them. It's not...
Erica Thompson
It's not unreasonable.
David Roberts
Crazy, right. How would we diversify the pool of modelers?
Erica Thompson
So that's what I mean, it's a really difficult question because
it's what statisticians would call a bias-variance trade-off.
You want people with a lot of expertise, relevant expertise, but
you don't want to end up with only one person or one group of
people being given all of the decision-making power. So how far,
sort of, away from what you consider to be perfect expertise do
you go? And I suppose maybe the first port of call is to say,
well, what are the relevant dimensions of expertise? And you can
start with perhaps formal education in whatever the relevant
domain is, whether it's public health or whether it's climate
science.
But I think, then, you have to include other forms of lived
experience, you know, and I don't know what the answer looks
like. You know, I say in the book as well, what would it look
like if we were to get some completely different group of people
to make a climate model or to make a pandemic model or whatever.
It would look completely different. Maybe it wouldn't even be
particularly mathematical or maybe it would be, but it would use
some completely different kind of maths. Maybe it would be, you
know, I just don't know because actually I'm one of these WEIRD,
in inverted commas, people, myself. I happen to be female, but in
pretty much every other respect, I'm as sort of standard
modeler-type as it comes. So I just don't know what it would look
like. But I think we ought to be exploring it.
David Roberts
As I think through the sort of practicalities of trying to do
that, I don't know, I guess I'm a little skeptical since it seems
to me that a lot of what decision makers want, particularly in
politics, is that sense of certainty. And I'm not sure they care
that much if it's faux certainty or false certainty or
unjustifiable certainty. It is the sort of optics and image of
certainty that they're after. So if you took that out of
modeling, if the modelers themselves said, "Here's a suite of
possible outcomes, how you interpret this is going to depend on
your values and what you care about," that would be, I feel like,
sort of, epistemologically more honest, but I'm not sure anyone
would want that. The consumers of models, I'm not sure they would
really want that.
Erica Thompson
But it's interesting. You say that that's a reason not to do it,
I mean, surely that's a reason to do it. If the decision makers
are, sort of, somewhat dishonestly saying, "Well actually I just
want a number so that I can cover my back and make a decision and
not have to be accountable to anyone else. I'm just going to say,
'Oh, I was following the science of course.'"
David Roberts
Right.
Erica Thompson
Well, that sounds like a bad thing. That sounds like a good
reason to be diversifying, and that sounds like a good reason not
to just give these decision-makers what they say they want.
There are maybe better arguments against it in terms of...is it
even possible to integrate that kind of range of possible outputs
into a decision making process? Like would we be completely
paralyzed by indecision if we had all of these different forms of
information coming at us? But I don't think that, in principle,
it's impossible. For example, I would say that near-future
climate fiction is just as good a model of the future as the
climate models and integrated assessment models that we have. I
would put it, kind of, not quite on the same level, but pretty
close.
David Roberts
Have you read "The Deluge" or have you heard of "The Deluge"?
Erica Thompson
I've not read that one, no. I was thinking of maybe Kim Stanley
Robinson's "Ministry for the Future." But other explorations of
the near-future are available.
David Roberts
Right. I've read both. I just really have to recommend "The
Deluge" to you. I just did a podcast with the author last week,
and it's a really detailed walk through 2020 to 2040, year by
year. And, obviously, fiction is specific, right? There are
specific predictions, which, scientifically, sort of...you'd
never let a scientist do that.
Erica Thompson
But you can explore the social consequences and you can think
about what it means and how it actually works, how it plays out
in a way that you can't in a sort of relatively low-dimensional
climate model. You can draw the pictures, you can draw the sort
of red and blue diagrams of where is going to be hot and where is
going to be a bit cooler. But actually thinking about what that
would look like and what the social consequences would be and
what the political consequences would be and how it would feel to
be a part of that future. That's something that models, the
mathematical kind of models, can't do at all. That's one of the
axes of uncertainty that they just can't represent at all. But
climate fiction can do extremely well.
David Roberts
Yeah, I was going to say that book got me thinking about these
things in new ways, in a way that no white paper or new model or
new IPCC report ever has.
Erica Thompson
Exactly. But if you're thinking of the models as being, sort of,
helping to form conviction narratives and they are sort of ways
of thinking about the future and ways of thinking collectively
about the future as well, as well as kind of exploring logical
consequences, then in that paradigm, the climate fiction is
really, genuinely, just as useful as the mathematical model.
David Roberts
Well, we've been talking about models in general and their sort
of limitations. So let's talk about climate specifically, because
it sort of occurred to me, maybe this isn't entirely true, but
with the epidemiological thing and the finance thing, models play
a big role in both, but there's also a lot of direct experiential
stuff going on. Whereas climate has come to us, the thinking
public, almost entirely on the back of models, right? I mean,
that's almost what it is. You know what I mean? You can see a
severe weather event, but it doesn't say climate to you unless
you already have the model of climate in your head.
So it's the most sort of thoroughly modelized field of sort of a
human concern that there is. And so all the kind of dysfunctions
that you talk about are very much on display in the climate
world. Let's just start by pointing out, as you do, the sort of
famous models that have been used to represent climate. William
Nordhaus's DICE model is one of the earliest and most famous. One
of the things it's famous for is his conclusion that four
degrees is the perfect balance of mitigation costs and climate
costs. That's the economic sweet spot.
And of course, any physical scientist involved in climate who
hears that is just going to fall out of their chair. Kevin
Anderson, who you cite in your book, I remember almost word for
word this quote of his in a paper where he basically says, "Four
degrees is incompatible with organized human civilization." Like,
flat out. So that delta: tell us how that happened and what we
should learn from that about what's happening in those
DICE-style models.
Erica Thompson
Well, I think we should learn not to trust economists with Nobel
Prizes. That's one starting point.
David Roberts
I'm cheering.
Erica Thompson
Good.
David Roberts
I'm over here cheering.
Erica Thompson
So, yeah, what can we learn from that? I mean, I think we can
learn, maybe, for a starting point, the idea of an optimal
outcome is an interesting one. Who says that there is an optimal?
How can we even conceptualize trading off a whole load of one set
of bad things that might happen with another set of bad things
that might happen?
David Roberts
Imagine all the value judgments involved in that!
Erica Thompson
Exactly, exactly, exactly. You're turning everything into a
scalar and then optimizing it. I mean, isn't that weird, if
anything?
David Roberts
Yes. And you would think, like, how should we figure out how we
value all the things in our world? Well, let's let William
Nordhaus do it.
Erica Thompson
Yes.
David Roberts
It's very odd when you think about it.
Erica Thompson
You can read many other, even better critiques of Nordhaus's work
and, sort of, thinking about these different aspects of how the
values of outcomes are determined and how things are costed. And
of course, as he's an economist, everything is in dollars, so the
least-cost pathway is the optimal one. So it may indeed be that
the lowest financial cost to global society is to end up at four
degrees, but that will end up with something that looks very
strange. Maybe there will be a lot more zeros in bank accounts.
Great, fine. But is that really what we care about?
David Roberts
Right. How many zeros compensate for the loss of New Orleans or
whatever?
Erica Thompson
Exactly. The loss of species across the planet and coral reefs
and all the rest of it? I think even the concept that you can put
these things on a linear scale and subtract one from the other
just doesn't make sense.
David Roberts
And also, one of the amusing features of these models that you
point out—which I have obsessed over for years—is, they sort of
assume, as a model input, that the global economy is going to
grow merrily along at 3% a year forever. And then, you know, I
have arguments with people about the effects of climate change
and they say, "Well, you know, it's not going to be that big a
deal. The economy is going to keep growing." And I'm like, "Well,
how do you know that?" And they're like, "Well, that's what the
model says." And I'm like, "Well, yeah, that's because you put it
in the model!" You can't put it in there and then later go find
it there and say, "Oh, look what we found, economic growth." And
they sort of hold that growth steady and then just subtract from
it whatever climate does. And the whole notion that...
Erica Thompson
I mean, I think everything is predicated on marginal outcomes:
that, as you say, everything will just continue as it is, and
climate change is only an incremental additional subtraction on
top of that. Perhaps we need to be sending these economists some
more climate fiction so that they can start thinking through what
the systemic impacts of climate change are.
Because yes, I can sort of see that if you thought climate change
was only going to be about the weather changing slightly in all
the different places, that you would say, "Well, what's the big
deal? The weather will change a bit and it'll be maybe a bit
hotter there and a bit wetter there and a bit drier there, and
we'll just adapt to it. You just move the people, and you change
your agricultural systems and grow different crops and raise the
flood barriers a bit." And all of those have a cost, and you just
add up the cost and you say, "Well, actually, we'll be able to
afford it. It'll be fine." So I can sort of understand how they
ended up with that view. And yet, as soon as you start thinking
about any of the social and political and systemic impacts of
anything more than very trivial perturbations to the climate, it
just becomes impossible to imagine that any kind of incremental
model like that makes any sense at all.
And yet this is sort of state-of-the-art in economics, which is
really disappointing, actually. It would be really nice to see
more.
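The circularity being described, growth assumed as an input and then "found" as an output, can be sketched in a few lines. This is a toy with made-up numbers, loosely in the spirit of a DICE-style projection, not any actual integrated assessment model:

```python
def gdp_path(years, growth=0.03, damage_fraction=0.02, gdp0=100.0):
    """Toy projection: exogenous compound growth, with climate damages
    applied as a marginal subtraction from output each year.
    Illustrative numbers only; not calibrated to any real model."""
    return [gdp0 * (1 + growth) ** t * (1 - damage_fraction)
            for t in range(years + 1)]

path = gdp_path(80)
# The economy "grows" under climate change only because 3% growth was
# hard-coded as an input; damages merely scale the curve down by 2%.
assert path[-1] > 10 * path[0]
```

Whatever damage fraction you subtract, the output shows steady growth, because steady growth was the assumption, never a finding.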
David Roberts
You don't even need to send them climate fiction. As you say in
that chapter, even if they just went and talked to physical
scientists, if you just ask physical scientists or sociologists
or people from outside kind of the economic modeling world,
"What's your expert sense of what's going to happen?" None of
them say, "Steady economic growth as far as the eye can see, with
the occasional hiccup."
Erica Thompson
Yeah. So I think economics has become sort of wildly detached
from physical reality somehow, and I'm not quite sure how it
happened. And, you know, there are good people within the
economics profession fighting against that tide, but it seems
very hard to counter it. Nordhaus was getting his Nobel Prize in
2018, which is only five years ago.
David Roberts
Yes. Another quote that grabbed me is about how we don't know how
to assign probability to some of these sort of big phase-shift
things that might happen, tipping points or whatever you call
them, or social tipping points. We don't know how to assign
probabilities to these things, and so we don't put them in the
model. And so then the model tells us, "Don't worry, these things
aren't going to happen." But as you say, "Absence of confidence
is not confidence of absence."
Erica Thompson
Exactly.
David Roberts
And one point you make, your general point about climate models,
is that they represent a failure, or several failures, of
imagination. But as you say, making the models this way so they
only show marginal changes, so they basically show the status quo
out to the indefinite future with just 1% or 2% of GDP growth
shaved off, is not benign, because the model feeds back and
affects how we think about the future. The failure of imagination
going into the model then comes back out of the model and creates
a failure of imagination.
This gets back to models not just being predictive engines, but
being narratives, stories, ways of thinking.
Erica Thompson
Yeah, these models change how we make climate policy. They change
how we think about the future. They change the decisions that we
make. They frame the way that we think about it. And so, I think
when we have economic models that say, "Four degrees is optimal,"
or when we have climate models that are, I think, not to the same
extent but somewhat guilty of doing the same thing, of projecting
a future which looks much like the past but with marginal
changes, that frames what we imagine the future to be.
I think maybe modelers, physical modelers are becoming more
confident about the possibility of more radical change in the
physical system as well. It was interesting to see the change in
language around the Atlantic meridional overturning circulation,
for example, the Gulf Stream, which is such a big influence on
the climate of Northern Europe. And of course, it's also because
it transfers heat from the Southern Hemisphere to the Northern
Hemisphere. If that were to change, it would be a huge change to
the climate of the Southern Hemisphere as well. So it's not
solely a European concern.
But I think models over the past sort of 20-30 years have
been...again, it's sort of this trying to find consensus and
trying to look like the other models. And I wouldn't say it's
necessarily deliberate. It's just sort of you run a model and you
find that it does something a bit weird. So you go back and you
tweak it, and you do something a bit different, and you try and
get it to look more like the other models. Because you think that
if all the other models say something, then that must be sort of
what we're expecting. And we don't want to look too far out,
otherwise, maybe we won't get included in the next IPCC report.
David Roberts
Right. And if you're averaging out, it's the discontinuities and
the sudden breaks that kind of get thrown overboard if you're
trying to...
Erica Thompson
Exactly. And you start saying, "Well, this one's an outlier, so
maybe we won't include it in the statistics. Or this one, it just
doesn't look physically plausible." And of course, anything, as
soon as you start looking into the details, you're going to be
able to say it's wrong or you're going to find a bug or something
because it's wrong everywhere, because all models are wrong. But
that shouldn't be a problem because we make models, knowing that
we are making a simplification.
But if we investigate the ones that are further out with more
zeal, looking for these errors and problems, we will find a
reason to discount them. So that is statistically worrying.
Really, we ought to preregister our model runs and say,
"Actually, I'm going to run this set of model runs with these
sets of parameters, and it doesn't matter what the output looks
like. I'm going to consider those all to be equally likely."
Because if you start going back and pruning them with respect to
your expert judgment about what it ought to look like, then
you'll end up with a distribution that looks like your
preconception, not like what the model was telling you.
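The selection effect Thompson describes can be sketched in a couple of lines. The numbers here are invented for illustration, not drawn from any real ensemble:

```python
def prune(runs, prior_mean, tolerance):
    """Post-hoc 'expert' pruning: keep only the runs that land close
    to what we already expected. A toy illustration of the selection
    effect described above, not a real ensemble workflow."""
    return [r for r in runs if abs(r - prior_mean) <= tolerance]

# Hypothetical outputs from 8 model runs (arbitrary units):
runs = [2.1, 2.8, 3.0, 3.2, 3.4, 4.1, 5.6, 6.3]
kept = prune(runs, prior_mean=3.0, tolerance=1.2)

# The full ensemble spans about 4.2 units; the pruned one about 2.0.
# The surviving spread mirrors the preconception, not the models.
full_spread = max(runs) - min(runs)
kept_spread = max(kept) - min(kept)
```

Scrutinizing only the outliers has the same effect as this explicit filter: the reported uncertainty range shrinks toward whatever the community expected going in.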
David Roberts
Yeah, it's one thing to say any given sort of discontinuity or
outlier might be statistically unlikely, but to me nothing's more
statistically unlikely than 80 years of human history with no
discontinuities and no sharp breaks and no wiggles in the lines
of smooth curves. And another way, this way of modeling sort of
turns around and affects us is, as you say, as we are forming
policy. And, I guess I had had this in my head, but I thought you
crystallize it quite well, which is that if you look at these
models, these climate economic models...if you look at the ones
where climate change gets solved—right, it's just sort of the
steadily increasing curve of solar and the steadily increasing
curve of wind and everything sort of just like marginally inches
up to where it needs to be—when you think about it, that
representation excludes radical solutions. It excludes
everything, really, but price tweaks.
Erica Thompson
Yes, because that's the way these models are made. They are
cost-optimizing models, which are entirely determined by the
price that you happen to set. And so the integrated assessment
models that we're talking about, they include costs on different
energy system technologies. So a cost for nuclear and a cost for
renewables and a cost for anything else you want to put in. And
depending on what it costs, it will rely more or less on that
particular technology. But of course, behavior change could just
as well be put in.
How much would it cost in dollars per ton of carbon avoided to
change people's behavior so that you use less electricity, for
example? Maybe we're starting to see that with all the stuff
about, you know, conserving energy in light of the Ukrainian
crisis, but how much would that cost? And it would be completely
arbitrary to say how much it would cost, because it's so
dependent on social and political whims and the winds of change
and the trends in society. It doesn't really make sense to try
and put a price on it because it would depend on how it's framed
and who's doing it and all of that.
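A toy version of the cost-optimizing logic makes the point: the "answer" the model gives is entirely a function of the cost assumptions fed in. The technologies and dollar figures below are hypothetical placeholders, not numbers from any real integrated assessment model:

```python
def least_cost_mix(costs, demand):
    """Toy cost-optimizer: meet all demand with whichever option was
    assigned the lowest cost per unit. Real integrated assessment
    models are far richer, but share this sensitivity to cost inputs."""
    cheapest = min(costs, key=costs.get)
    return {tech: (demand if tech == cheapest else 0.0) for tech in costs}

# Hypothetical $/ton-avoided figures. Lower one input and the model's
# "recommendation" flips wholesale from renewables to behavior change.
mix_a = least_cost_mix({"renewables": 50, "nuclear": 80, "behavior_change": 60}, 100)
mix_b = least_cost_mix({"renewables": 50, "nuclear": 80, "behavior_change": 40}, 100)
```

Because behavior change has no well-defined price, whoever sets that input effectively sets the policy conclusion.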
David Roberts
Right. Or like, what is the dollar value of a social uprising
that results in social democracy like that? How do you price
that?
Erica Thompson
And also on the technologies. I mean, I'm sure you've discussed
this before on your podcast, but the cost of carbon capture and
storage, how much is that going to influence the pathways that we
have? And you see the pathways more and more are dependent on a
lot of carbon capture at the end of the century in order to make
everything balance out. If you put it in with a high cost, then
you won't use it. If you put in with a low cost, you'll use loads
of it.
And then is that performative or is it counterperformative? Is it
the case that the policymakers look at it and say, "We're going
to need loads of this interesting technology and we don't have it
yet, I'd better put loads of money into investing and developing
it." Or do they look at it and say, "Oh, this means that the
economic forces that are acting in the climate domain mean that
it will be highly economic to do air capture at the end of the
century and therefore governments don't need to do anything and
we'll just wait and it will happen because it's determined by the
market." Which way are they thinking? I have no idea.
David Roberts
Right.
Erica Thompson
But those are really different, and they result in really
different futures. They don't result in the future that was
predicted.
David Roberts
Right. This gets to moral hazards and model hazards, which I hope
you can segue into here because I found that those two concepts
also quite helpful.
Erica Thompson
So the next thing I think is going to end up in these models is
geoengineering, for example. You could equally well put it into
the same model with the same framework. It would then be in terms
of dollars per ton of carbon equivalent in the atmosphere, but
negative: the amount of shading that you could get for a certain
amount of stratospheric aerosol injection, or whatever your
favorite technology is. You could, in principle, stick that in.
And what is the price that you're going to put on it? If you put
it in at $2,000 per ton of CO2, it's not going to happen. If you
put it in at $2 per ton of CO2, it's going to be totally relied
on and it will be the linchpin of all successful trajectories
that meet the Paris targets by 2100. And if you put it in
somewhere in between, you'll get more or less of it, depending on
that price point. So who decides what price point it's going to
go in at?
David Roberts
Yes, and you really capture the sort of ouroboros nature of this.
So we add up all the technologies we have, there's a hole left,
and we say we're going to fill that hole in our mitigation with
carbon capture. And then we turn around and look at the model
where we stuck this arbitrary amount of carbon capture in and
say, "Oh, well, we have to do carbon capture because that's what
the model said is needed." And again, it's like, "Wait a minute,
you went and put that label on the hole in the model."
Erica Thompson
Yes.
David Roberts
And then you went in and found it in the model and are now
claiming that the model is telling you you have to do this, but
it just says you have to do this because you're hearing an echo
of your own decisions.
Erica Thompson
Exactly. But I think, more generally, that's what these models
are doing for us. They encapsulate a set of expert judgments and
opinions and they put them into a mathematical language. But that
doesn't make them any more objective. It perhaps makes them
slightly more logically self-consistent, with the different
numbers having to chime with each other, but it doesn't actually
make them any more authoritative or objective than if they were
just written down or spoken.
David Roberts
Well, it insulates them.
Erica Thompson
It insulates them from criticism.
David Roberts
Public scrutiny.
Erica Thompson
Yes, absolutely.
David Roberts
It gives them the vibes of expertise that daunt people and keep
people away.
Erica Thompson
Yes.
David Roberts
And so carbon capture right now is playing that role. We just
sort of decided arbitrarily we need x amount of carbon capture
because that's how much mitigation we have left to do that we
don't know how to do with other sources. And we're arbitrarily
deciding on the price of carbon capture because we don't know
what that price is because it doesn't really exist at scale yet.
So we're making these arbitrary decisions.
Erica Thompson
Exactly. It was going to be renewables and renewables weren't
fast enough, so then it had to be something else. And then it was
going to be carbon capture and storage and that wasn't quite
enough. So now it's direct air capture and next it's going to be
geoengineering. I mean, I can't see another way around that. That
is the trajectory that these models are taking. And once the
geoengineering is in the models, then it will become a credible
policy option, an alternative. So we need to be ready for that.
David Roberts
Well, this point you're making so disturbed me that I wrote the
whole quote down from the book. You say, "If the target of
climate policy remains couched in the terms of global average
temperature, then stratospheric aerosol geoengineering seems to
me now to be an almost unavoidable consequence, and its inclusion
in integrated assessment models will happen in parallel with the
political shift to acceptability." That's just super disturbing.
So we're just sort of assuming a can opener to fill these holes
in our models, and then we're finding a can opener in our model,
and we're like, "Oh my god, we've got to go build one."
Erica Thompson
Yes. And so this is why I think it's so important that we move
the discussion away from technology and toward values. I think that
stratospheric aerosol injection could be a perfectly legitimate
and reasonable solution, but it must be one that we've talked
about, and it must be one that we understand what value judgments
are being made. What trade-offs are being made? What kind of
solutions are being ignored in favor of doing this technological
thing? What kind of other options are favored by different people
and different kinds of people?
Because geoengineering, the sort of big, sexy technological
project, is a very tech-bro solution. It's a very top-down,
mathematical, elitist, predict-and-optimize approach. It's in the
same vein as all of these economic things. It's about
optimization and calculation.
David Roberts
I always think about the guy who wanted to set off a nuclear bomb
on the Alaska coast to make a better harbor.
Erica Thompson
Yeah, right. So it's about one-dimensional outcomes. If you say,
"All we want is a harbor." Okay, go ahead and do the nuclear
bomb, because it will achieve your objective. And if literally
the only objective of climate policy is to keep global average
temperature below two degrees, then geoengineering will probably
be the most cost effective and easy way to do that. But, it is
not the only thing that matters. The future of global democracy,
the values of different citizens. What kind of future are we
trying to get to? So I think this is another problem of the way
that we typically model, is that it starts with an initial
condition of where we are now, and then everything spreads out
and everything becomes more uncertain as you look forward in
time.
And that kind of leaves people twisting in the wind, wondering,
well, what is this future going to look like? We just don't know.
It's really uncertain. It's really scary. It could be this, it
could be that. It could be catastrophe. And actually, I think
politically and in terms of thinking maybe more in conviction
narratives, what we need to be doing is coming up with a vision
for 2100, articulating a vision for what the future would look
like if we had solved the problem that we have.
And it's not just climate change. It's resource scarcity, and
it's sociopolitical questions. And ultimately, it's a much
bigger, kind of almost theological question about how humanity
relates to the planet that we happen to find ourselves on. You
know, these are big, big questions, and they're not technical
questions. They're social and political and spiritual questions
about what we're doing here and what we want society to look
like. And so, if you had a vision of the future, of what
you want 2100 to look like and how people should be living with
each other and how, politically, we should be thinking about our
problems then you say—and then you use your model in a different
mode—you say, "If we're aiming for that kind of future, what do
we have to do one year from now, five years from now, ten years
from now, thirty years from now, in order to stay on track for
that future that we want?"
Rather than just saying, "We are starting here from this initial
condition, and we have all these possible outcomes, possible
trajectories kind of diverging forward from us." That's a
really...much harder sell, and it's harder to communicate. And I
think it lends itself towards this one-dimensional thinking of
saying, "Global mean temperature is the problem." Well,
global mean temperature is not really the problem. Geopolitics is
the problem.
David Roberts
Nobody lives in mean temperature.
Erica Thompson
Nobody was ever killed by global mean temperature. People are
killed by things that happen locally.
David Roberts
And if you're envisioning the 2100 you want, nobody's envisioning
a global mean temperature.
Erica Thompson
But people may be envisioning very different things. And then I
think it is interesting to listen to some of the people who might
call themselves climate skeptics. What is it that they're afraid
of? It's sort of authoritarian global government and all that
sort of thing. And is that, in fact, what climate models and the
larger scale modeling community are kind of being shepherded into
propping up? I mean, what is it, politically, that is convenient
about this kind of model as opposed to another kind of model or
another kind of way of thinking about the future and orienting
ourselves towards the future?
David Roberts
This is, I think, something the book conveys really well: if you
think about adequacy to purpose, you think, "Well, what is the
purpose?" And the purpose of achieving a desired sociopolitical
outcome in 2100 is very different from the goal of hitting a
global mean temperature target. But just because you're targeting
global mean temperature doesn't mean you're
not making a political statement. The political statement you're
making is: "We want to preserve the status quo." Right? We want
everything to stay the way it is with a few tweaked parameters.
I'm sure the modelers probably wouldn't sort of explicitly say
that.
Erica Thompson
No, and I think it's harder to make that argument for climate
models than for economic models. You know, the physics of climate
is somewhat different from the economics of climate.
David Roberts
Well, the climate economic models, I mean.
Erica Thompson
Yeah, the economic models. No, absolutely. And it's all in there
in that one-dimensional reduction of everything to costs. If we
reduce everything to costs, then we end up saying that if African
GDP decreases by 80% while American GDP increases by 20%, maybe
that's an adequate trade-off. You turn it into something...again,
this just doesn't make sense. We have to be thinking about the
moral and ethical content of these statements.
When you say "A dollar is a dollar is a dollar," and you are
happy to trade off 80% of GDP in Sub-Saharan Africa against a 20%
increase in GDP in Northern Europe or the US, which is what some
of these economic models effectively end up doing, that's an
enormous ethical judgment and one that I think, if it were made
clearer, people simply wouldn't agree with.
David Roberts
That's a more elegant way of putting the point that I frequently
put bluntly to modelers about this, which is: you could wipe out,
I mean, never mind 80% of the GDP, you could just wipe out the
entire continent of Africa, and it wouldn't have a very big
effect on the course of global GDP. So is that okay? Are we
optimized still if we've lost all of Africa?
Erica Thompson
This is one of the successors to Nordhaus. There are other papers
in climate economics which take a more, you know, a slightly more
realistic view. And so, I was asked for a comment on a paper
about, effectively the same thing, the sort of average
temperature and the optimal pathways. And so they look and find
that an increase of a few degrees would reduce the GDP of Africa
by something like 80%. You know, very dramatic. And you say, "Is
it remotely credible to think that one could have absolute
economic crisis in some of the largest nations on Earth without
that having any feedback effect on the rest of the planet?"
David Roberts
And they just meekly accept it. They're like, "Whoa, dang it.
Dang it."
Erica Thompson
Regardless of whether you consider it ethically acceptable, do
you really think that it can happen without any geopolitical
implications? Is the billionaire sitting there in the bunker in
New Zealand going to be happy with a few extra zeros on the end
of their bank account as the world collapses around them? I mean,
are they really? I really am interested to know what the kind of
thought process is there. Like, I don't quite understand how you
come to what seems to be the conclusion that you should be
hoarding the resources and then holing up in a bunker in New
Zealand.
David Roberts
Oh, my goodness. I don't know if you saw recently the article by
Douglas Rushkoff, where he was summoned basically to a panel of
billionaires.
Erica Thompson
Oh, yes, I did see that one.
David Roberts
And they were asking him questions about their bunkers. And
whatever low opinion you might have had of them, it's not low
enough. The questions about their bunkers are so naive.
Erica Thompson
Yeah, it's depressing.
David Roberts
So in "model land," in some sense, it's absolutely wild.
Erica Thompson
But this is the economic mentality of saying, "The zeros on the
bank account are all that matters, and I am an individual, and I
am not part of a society, and I can thrive regardless of what the
rest of the planet looks like." It's that sort of divorce from
reality that some group of people has somehow arrived at, and
perhaps it's an extreme version of the mentality of the
economists and the economic models that are making these kinds of
projections and saying that this kind of thing can happen.
David Roberts
So, taking your recommendations, I mean, you have at the end of
the book, five recommendations for better modeling, and I think
people can probably extrapolate some of them from what we've said
so far. You bring in more kinds of perspectives. You bring in
more different kinds of models, you take outliers more seriously,
things like that. But if you did all those things, what you would
be doing is stripping away a lot of the kind of faux objectivity
that we have now and exposing the fact that there's a hole that
can only be filled by expert judgment or by judgment, really, by
human judgment.
And that is terrifying, I think, to people, particularly people
making big decisions that involve lots of people. They're
desperate for some sense of something solid to put their back
against, right, something that they can reference if they're
questioned later about why they made the decision. So I wonder
if, in a sense, these are not problems arising out of just
sort of bad modeling, but in some sense these problems are
downstream from a very basic, sociocognitive need for certainty
and a fear of, sort of, openly exercising judgment and openly
defending ethical positions. Do you know what I mean? In some
sense, that fear is what produced this situation rather than vice
versa.
Erica Thompson
Yeah, I don't disagree. I think they kind of have gone together
and as the models and the idea that the science can give us an
answer—and the promise of the scientists that science will be
able to give us an answer—as the scientists have kind of gone,
"Oh, hey, we could do that. And we could do this. And we could do
the other thing as well. And we can give you an answer and just
give us a few more million pounds and a better computer and we'll
give you more answers and better answers, and then we'll start
applying some AI as well, and we'll automate it all." And
eventually, you won't even need to think about it. You can just
follow the science.
David Roberts
Follow the science.
Erica Thompson
Follow the science. I really don't like, "Follow the science."
David Roberts
I hate that term so much, I was literally cheering in my bed
reading this part. But you say, what to me always seems so
obvious, and yet when I try to talk about this on Twitter or in
public, I just get the weirdest backlash. But I just want to tell
people in the climate world, like, science does not tell you what
to do. Quit claiming that we have to do X, Y, and Z because
science says so. That's just not the kind of thing that science
does!
Erica Thompson
Science hopes to be able to tell you, like, in the best case
scenario, science can tell you if you do A, this will happen, and
if you do B, that will happen. And if you do C, that will happen,
but it doesn't have an opinion, in theory, on which of those is
the best outcome. Now, in practice, given the kind of science we
do and the way that, as I've described, values and judgments do
enter into the modeling process, we do to some extent have those
value judgments entering even that beginning section. If A, then
what? And if B, then what? And if C, then what?
But you can't get from an "is" to an "ought"; you have to
introduce value judgments. You have to say, "I prefer this
outcome." And ideally, if you're making decisions on behalf of a
large group of people, that has to be in some way representative,
or at least you have to communicate, "I want this outcome for the
following reasons." And so, I would really like to see an IPCC
Working Group IV, which would be about ethics and value judgment
and the politics of climate change, and would ask, "Well, why is
it that people disagree?"
Because I think if you go to climate skeptic—again sort of in
inverted commas—conferences, or if you talk to them, they are not
idiots and they are not uncaring. They tend to be people who
genuinely care about the future and about their children's
prospects and all the rest of it. And okay, many people find them
very annoying, but the point is that their underlying motivation
is actually very similar to most other people's; they just have
quite different assumptions about what the future will look like,
and perhaps, in some cases, misconceptions about the facts. But a
lot of that is motivated by a worry about the
political outcomes of what people saying, "follow the science"
are telling you to do.
David Roberts
Right, exactly. And I think they sense, in some ways, almost more
than sort of your average kind of lefty climate science believer
does, that there are value judgments being smuggled past them
under cover of science.
Erica Thompson
I mean, it's easier to spot value judgments when they are not
your own value judgments, because if they are your own value
judgments, then you don't really notice them, you just think it's
natural. And so this is another good argument for diversity in
modeling: these value judgments are much more easily uncovered by
somebody who doesn't share them.
David Roberts
Even just to say, "Humanity is worth preserving, we should
preserve the human species." That, in itself, is a value
judgment.
Erica Thompson
Yes, a value judgment. Absolutely.
David Roberts
Science is not telling you that you need to or have to do that. I
sort of wonder, and this is talking about unknowables, but if the
IPCC did that and really did systematic work on all these value
judgments, sort of dragging them out of their scientific garb and
exposing them to the light and reviewing how different people
feel about them, do you feel like that would help? Because I know
your average weird science-model bro. His fear about that is,
well, if you do that, then everybody will just think the values
are relative and they can choose whatever they want and, you
know, it'll be chaos. But do you think that's true, or do you
think it would help?
Erica Thompson
I don't know whether it would help. I mean, I think that it would
help to separate the facts and the values, because people who
disagree on the values end up fighting over the facts, since
there is no conversation about the values. The only thing they
can get their hands on is the model and, effectively, the facts,
the science. And so they start raising sometimes quite reasonable
questions about the statistics of model interpretation and
sometimes unreasonable criticisms about, say, the greenhouse
effect.
Now, if we could separate that out and say, actually, we agree
that the greenhouse effect is a real thing because this is basic
physics and actually criticizing that doesn't make any sense. But
we will entertain your difference of value judgments about the
relative importance of individual liberties and economic growth
versus the value of other species or of human equality or
whatever, all of these other things. You can stick it all in
there and say we allow you to have a different opinion and then
maybe we can agree to agree on the facts. So I think it probably
wouldn't work, because things are probably too far gone for that
to actually result in any form of consensus.
But I think if we could sort of bottom that out and say to
everybody, "What is it that you're most scared of? What is it
that you're most scared of losing here?" I think that would be a
really revealing question, and I think that would also help to
incorporate different and more diverse communities into the
climate conversation, because then you're into questions about,
well, really, what is it that you care about? What are you scared
of? What future are you most scared of? Are you most scared of a
future where society breaks down, in inverted commas? But is it
because you're scared of other people? Or is it because you are
worried about not having the economic wealth that you currently
enjoy?
Or is it because you are scared of losing the biodiversity of the
planet? Or...there are so many things that people could kind of
put in that box.
David Roberts
Or are you most scared of losing your gas stove?
Erica Thompson
Yes, that's an interesting one, isn't it? So why has that become
such a big thing?
David Roberts
Really is, right? There's layers to it.
Erica Thompson
There's layers, but there's layers on both sides. I mean, there's
the kind of the instinctive, "Don't tell me what to do," but
there's also, "Well, why are you telling people what to do? Why
is the information not sufficient?"
David Roberts
Right?
Erica Thompson
What is the kind of knee-jerk requirement to regulate versus the
knee-jerk response against regulation? They're both kind of
instinctive political stances.
David Roberts
Yes, and a lot of values...
Erica Thompson
With a whole load of other things tangled up in them. Which, I'm
not an American, so I hesitate to go any further than that.
David Roberts
Yes, well, there are layers upon layers that you can even
imagine. They're like local political layers. It goes on and on.
I'm doing a whole podcast on it and I'm worried how to fit it all
into 1 hour. It's just on gas stoves. And I also think, to follow
up on the previous point you were making, the model-centricness
of our current climate dialogue and climate policy dialogue just
ends up excluding a lot of groups who have things to say and
values to contribute. And, you know, the sort of cliche here is
Indigenous groups; they have relationships with the land that are
extremely meaningful and involve particular patterns, and those
things are of great value. But if they're told at the door,
"Quantify this or..."
Erica Thompson
Quantify this or it doesn't count, yeah.
David Roberts
...stay out, then they're just going to stay out. So...
Erica Thompson
Yeah.
David Roberts
...at the very least it would be a more interesting dialogue if
we heard from more voices.
Erica Thompson
Yeah, but I mean, I think we have to sort of internalize and
accept that people with less formal education still count. We
tend to assume there's a hierarchy, that people with more letters
after their name are more qualified, and therefore more qualified
to inform climate policy and more qualified to have a view on
what the future should be like. I realize it's a somewhat radical
position, but I think that everybody has a valid opinion and a
right to an opinion about what they want the future to look like.
David Roberts
Yes, we're just back to...it's funny we're talking about it in
the realm of climate but as you say in the book, there's just a
million realms of sort of human endeavor, especially collective
human endeavor, where we're running into these same kinds of things. We
don't really seem to know how to have honest, transparent
arguments about values anymore.
Erica Thompson
And we find it really hard to talk about values at all. It's
really hard, even like if a scientist stands up and says that
they love and care about something, that's kind of a weird thing
to do. Why would you do that? We're all a bit uncomfortable.
You're biased. Exactly.
David Roberts
Biased in favor of life.
Erica Thompson
When you start saying that sort of thing, maybe your science is
corrupted by it. We can't have that.
David Roberts
Yes, I know. And another thing I get yelled at about online is
just trying to convince people that you are an embedded creature.
You have a background, you are socialized to think and feel
particular ways; you are coming from a place, and it's worth
being aware of what that place is, aware of how it might be
influencing your thinking, and aware of your blind spots...
Erica Thompson
And aware that some people's places and situations are noticed
more than others. If you are a, sort of, white male,
well-educated tech bro, then your personal background and
situation is not scrutinized the way it is if you are someone
"different," in inverted commas, in whatever way that might be.
David Roberts
And the more privilege you have, the more incentive you have to
think that your opinions are springing from the operation of pure
reason.
Erica Thompson
Are "objective" and "neutral."
David Roberts
When your value judgments are "hegemonic," let's say...
Erica Thompson
Exactly.
David Roberts
It's all to your benefit to keep them hidden, right? You don't
want them dragged out into the light. Anyway, okay, I've kept you
for way longer than I thought I would. As I said, I love this
book. There's one more thing I wanted to touch on just briefly,
and this is a bit of a personal goof, but I, in another lifetime,
many, many moons ago, studied philosophy in school. And you slip
a line in here early, early in the book when you're talking about
what models are and just sort of what you mean by model, and you
talk about how they're just ways of structuring experience so
that we can make sense of it and predict it.
And when you think about it that way, as we said earlier in the
conversation, pretty much everything is a model. Like, we're not
processing raw data, right? We're filtering from the very
beginning through our, sort of, models. And you slip in this line
where you say, in this sense, real laws, like, say, speed of
light or gravity or whatever are only model laws themselves,
which is to say, all our knowledge, even the knowledge we think
of as most objective and sort of straightforward and unmediated
is in a model. And therefore all the things you say about our
relationships with our models and how to do better modeling, it
seems to me, all that applies to all human knowledge, right?
Erica Thompson
Yes. I mean, you're really in the rabbit hole now, but yes. What
is it that convinces you that the speed of light is the same
today as it will be tomorrow?
David Roberts
Exactly.
Erica Thompson
I mean, how do you know? How do you know? What is it that gives
you that confidence? I mean, I think you can reasonably have
confidence in many of these things. And of course, the
mathematics is, as Eugene Wigner said, unreasonably effective in
the natural sciences. There is no a priori reason to think that
it ought to be, so don't worry too much about it. I think that we
can make an empirical observation that the laws of physics do
work really well for us and that models are and can be incredibly
successful in predicting a whole load of physical phenomena and
can be genuinely useful and can be calibrated. And we can have
good and warranted reliance on those models to make decisions in
the real world. So, yes, you're right that, technically, I think
there is a problem all the way down, but we do have more
confidence in some areas than others.
David Roberts
Well, this course you're charting between, on the one hand, sort
of naive logical positivism, right, that we're just sort of
seeing reality, and on the other hand...
Erica Thompson
Naive skepticism that says we just can't do it.
David Roberts
Hopeless relativism.
Erica Thompson
Yes, exactly.
David Roberts
We have this middle course, which I associate very strongly with
the American pragmatists, James and Dewey, and then on later into
Rorty. I don't know if you ever got into that or studied that,
but this sort of practical idea that we know nothing for certain,
but we do know things. And to say that we only believe a model
because it's worked in the past, and we don't have any sort of
absolute metaphysical certainty that it maps onto reality or will
work again, is not disqualifying. That's just the nature of human
knowledge.
Erica Thompson
It's as good as we can get. You just can't have full certainty.
David Roberts
But it works.
Erica Thompson
It works. It's good.
David Roberts
Like, some things work. And to me, this is pragmatist
epistemology all over again. So I don't know if anybody's ever
brought up that parallel with you.
Erica Thompson
Yeah, I'm not a philosopher, and I'm kind of only tangentially
involved with philosophy of science, and there are many different
streams of thought within that, but, yes, it sounds very much
like that.
David Roberts
Well, those were all my beloved...that's what I studied back when
I studied philosophy, and so a lot of this stuff that you're
saying throughout, I was like, "this is not just about
mathematical models, this is just about how to be a good,
epistemic citizen." Right? How to think well.
Erica Thompson
Well, that would make a good subtitle.
David Roberts
Yeah. I thought you might want to rein it in a little short of...
Erica Thompson
Maybe the next book.
David Roberts
...of those kinds of grand claims. But I really do think that
even people who aren't interested in mathematical modeling as
such can learn from this, just about how to have, what's it
called, "negative capability." Just, sort of, a bit of distance
from your own models, a little sense that you're not bound up in
your own models, the sense that models are always, in some sense,
qualified and up for debate and change. I just think it's a good
way to go through the world.
Erica Thompson
And just how to think responsibly about models in society, think
critically and think carefully about what it implies to use these
models and to have them as important parts of our decision-making
procedures. Because they are, and they're going to stay that way,
so we need to get used to it, and we need to understand how to
use them wisely.
David Roberts
Right. "Good tools, poor masters," as they say about so many
things. Yes, and this is it: you kept referencing expert
judgment, but I kept coming back again and again throughout the
book to the term "wisdom," which is a little bit fuzzy, but
that's exactly what you're talking about. It's just...
Erica Thompson
Yes, yes it is.
David Roberts
...accumulated good judgment. That's what wisdom is.
Erica Thompson
Wisdom and values and understanding, having leaders, I think, who
can embody our values and show wisdom in acting in accordance
with those values. I think that's something that has kind of gone
out of fashion, and I would really like to see it come back.
David Roberts
True, true. Well, thank you so much. Thanks for coming on and
taking all this time. I really, as I say, enjoyed the book, and
people are always asking me to read climate change books, and,
you know, like, 90% of them are like, "I know all this. You're
just telling me things I know." But I would say, if I was going
to recommend a climate book to people who already know about
climate and are familiar with the science, I would recommend this
book, because it's just about how to think about climate change.
One of the most live and important discussions around climate
change, I think, is still just: how do we cognize this? How do we
act in the face of this? How do we think about how to act in the
face of this?
And I think your book is a great guide for that. So, thank you.
Erica Thompson
Fantastic. Well, thank you so much for having me. It's been fun.
David Roberts
Thank you for listening to the Volts podcast. It is ad-free,
powered entirely by listeners like you. If you value
conversations like this, please consider becoming a paid Volts
subscriber at volts.wtf. Yes, that's volts.wtf, so that I can
continue doing this work. Thank you so much and I'll see you next
time.
This is a public episode. If you'd like to discuss this with other
subscribers or get access to bonus episodes, visit
www.volts.wtf/subscribe