Circulation October 16, 2018 Issue

Circulation Weekly: Your Weekly Summary & Backstage Pass To The Journal

Dr Carolyn Lam: Welcome to Circulation on the Run, your weekly podcast summary and backstage pass to the journal and its editors. I'm Dr Carolyn Lam, associate editor from the National Heart Centre and Duke-National University of Singapore. Will artificial intelligence replace the human echocardiographer? Aha, well to find out the answer, you have to wait for the incredibly exciting discussion of today's feature paper coming right up after these summaries.


The clinical benefits of the cholesteryl ester transfer protein, or CETP, inhibitor dalcetrapib depend on adenylate cyclase type 9, or ADCY9, genotype. However, what are the underlying mechanisms responsible for the interactions between ADCY9 and CETP activity? In the first paper from today's journal, first author Dr Rautureau, corresponding author Dr Tardif from Montreal Heart Institute, and colleagues used a mouse atherosclerosis model inactivated for ADCY9 and demonstrated that loss of ADCY9 protected from atherosclerosis and was associated with improved endothelial function, but only in the absence of CETP. ADCY9 inactivation increased weight gain, adipose tissue volume, and feed efficiency, but only in the absence of CETP.


This mouse model reproduced the interactions between ADCY9 and CETP activity observed in patients, and offers new mechanistic insights into the importance of ADCY9 in determining the responses to CETP inhibition. For example, the dal-GenE clinical trial is currently testing prospectively whether patients with coronary disease and the favorable ADCY9 genotype will benefit from dalcetrapib.


The next study addresses the controversy around the cardioprotective effects of omega-3 polyunsaturated fatty acids, and uncovers signaling pathways associated with eicosapentaenoic acid, or EPA, supplementation that may mediate protective effects in atherosclerosis. First author Dr Laguna-Fernandez, corresponding author Dr Bäck from Karolinska Institute, and their colleagues showed that EPA supplementation significantly attenuated atherosclerotic lesion growth. They performed a systematic plasma lipidomic analysis and identified 18-monohydroxy eicosapentaenoic acid as a central molecule formed during EPA supplementation. 18-monohydroxy eicosapentaenoic acid is a precursor for the pro-resolving lipid mediator called resolvin E1.


In the present study, resolvin E1 was shown to regulate critical atherosclerosis-related functions in macrophages through its downstream signaling receptor, thereby transducing protective effects in atherosclerosis.


Are there racial differences in long-term outcomes among survivors of in-hospital cardiac arrest? In the next paper, first and corresponding author Dr Chen from University of Michigan and her colleagues performed a longitudinal study of patients more than 65 years of age who had an in-hospital cardiac arrest and survived until hospital discharge between 2000 and 2011, from the National Get With The Guidelines-Resuscitation Registry, whose data could be linked to Medicare claims data. They found that compared with white survivors of in-hospital cardiac arrest, black survivors had a more than 10% lower absolute rate of long-term survival after hospital discharge. This translated to a 28% lower relative likelihood of living to one year, and a 33% lower relative likelihood of living to five years after hospital discharge for black versus white survivors.


Nearly one-third of the racial difference in one-year survival was explained by measured patient factors. Only a small proportion was explained by racial differences in hospital care, and approximately one-half was the result of differences in care after discharge, or unmeasured confounding. Thus, further investigation is warranted to understand to what degree unmeasured but modifiable factors, such as post-discharge care, may account for the unexplained disparities.


The next study provides insights into a novel mechanism of atherogenesis that involves protease-activated receptor 2, a major receptor of activated factor X, which is expressed in both vascular cells and leukocytes. Co-first authors Drs Hara and Phuong, corresponding author Dr Fukuda from Tokushima University Graduate School of Biomedical Sciences, and their colleagues showed that in ApoE-deficient mice, protease-activated receptor 2 signaling activated macrophages and promoted vascular inflammation, increasing atherosclerosis.


Furthermore, they showed that in humans, plasma activated factor X levels positively correlated with the severity of coronary artery disease, suggesting that this signaling pathway may also participate in atherogenesis in humans. Thus, the protease-activated receptor 2 signaling pathway may provide a novel mechanism of atherogenesis and serve as a potential therapeutic target in atherosclerosis.


The next paper tells us that biomarkers may help to predict
specific causes of death in patients with atrial fibrillation.
First and corresponding author Dr Sharma and colleagues from Duke
Clinical Research Institute evaluated the role of biomarkers in
prognosticating specific causes of death among patients with
atrial fibrillation and cardiovascular risk factors in the
ARISTOTLE trial.


They looked at the following biomarkers: high-sensitivity troponin T, growth differentiation factor 15, N-terminal pro-B-type natriuretic peptide, and interleukin 6. They found
that sudden cardiac death was the most commonly adjudicated cause
of cardiovascular death, followed by heart failure and stroke or
systemic embolism deaths. Biomarkers were some of the strongest
predictors of cause-specific death, and may improve the ability
to discriminate among patients' risks for different causes of
death.


How do the complement and coagulation systems interact in cardiovascular disease? Well, in the final original paper this week, first author Dr Sauter, corresponding author Dr Langer from Eberhard Karls University Tübingen, and their colleagues used several in vitro, ex vivo, and in vivo approaches, as well as different genetic mouse models, to identify the anaphylatoxin receptor C3aR and its corresponding ligand C3a as platelet activators that acted via intraplatelet signaling, resulting in activation of the platelet fibrinogen receptor GPIIb/IIIa. This in turn mediated intravascular thrombosis, stroke, and myocardial infarction. This paper, therefore, identifies a novel point of intersection between innate immunity and thrombosis, with relevance for the thromboembolic diseases of stroke and myocardial infarction.


That wraps up this week's summary. Now for our featured discussion.


Can we teach a machine to read echocardiograms? Well, today's feature paper is going to be all about that. I am so excited to have with us the corresponding author of an amazing and, I think, landmark paper, Dr Rahul Deo from the One Brave Idea Science Innovation Center and Brigham and Women's Hospital in Boston, as
well as our associate editor Dr Victoria Delgado from Leiden
University Medical Center in The Netherlands. Now let me set the
scene here. We know that echocardiography is one of the most
common investigations that we do in cardiology, and in fact even
outside of cardiology, and it is hands down the most accessible,
convenient tool to image the heart.


Now let's set this up by remembering that echocardiograms are performed with machines, but led by echocardiographers like me. Now this is really scary, Rahul, because I think your paper is trying to say ... Are you trying to put people like me out of business?


Dr Rahul Deo: Definitely not. I think what I'm hoping to do is actually two things. One of them is, despite the fact that it's an accessible and safe tool, because it needs people like us, it's probably not used as often as it ideally could be. So part of our hope was to democratize echocardiography by being able to take out some of the expenses from the process, so that we can hopefully get simpler studies done at an earlier stage in the disease process. Because in many ways, at least from my experience as an attending, it feels like if we could just have gotten to these patients earlier, we may have been able to start therapy that could've changed the disease course, but our system can't really afford to do huge numbers of echoes on asymptomatic patients. Really, we were trying to find some way of facilitating this by at least helping out on trying to quantify some of the simple things that we do with echocardiography.


Dr Carolyn Lam: I love that phrase, democratizing echo. And you're absolutely
right, if we could put it in the hands of non-experts and help
them interpret them, we could really lead to detecting disease
earlier, and so on and so forth. Wow. But everyone's wondering,
how in the world do you go about doing that?


Dr Rahul Deo: One of the things that's really been amazing in these last five years or so is that the field of computer vision, the field by which computers are trained to mimic humans in terms of visualizing, recognizing, and identifying images, has really advanced, and incredibly rapidly. And one of the reasons for that is that the video game type of computing system, the same things that go into PlayStations and such, have resulted in much, much more rapid computing. And that's allowed us to train more complex models.


So that's one of the things that's changed, and also, it's just much easier to get our hands on training data. So machines can be trained to do things, but they need lots of examples. And the harder the task, the more examples they need. So the widespread availability of digital data has made that easier, though I would say that it wasn't that easy to get our hands on enough echocardiography data to be able to train. But in general, almost any task where there's enough data has been solved on the computer vision side. So this has really been an exciting advance in these last few years. So we thought we could very well just use these same technologies on a clinical problem.


Dr Carolyn Lam: Okay, but Rahul, what are you talking about here? Like the
machine's actually going to recognize different views, or make
automated measurements? That's the cool thing, frankly, that
you've written about because we know that the machines can
already kind of do EF, ejection fraction, but you're talking
about something way bigger. So tell us about that.


Dr Rahul Deo: Yeah, so there are many cute examples in the popular press about
machines being able to recognize the differences between cats and
dogs, or some breeds of dogs. And so if you think about things
that way, it really shouldn't be that much more difficult to
imagine recognizing between different views, which probably are
much more dramatically different than different breeds of dogs.
So you could really just take the same models, or the same
approaches, give enough examples, label them, and then say figure
out what the differences are.


And I think one of the challenges with these systems is they're often black boxes. They can't tell us exactly what it is that they're using, but when it comes to something like recognizing whether something is an apical four-chamber view or a parasternal long-axis view, we actually don't care that much about how it is that the computer gets there. We just want it to do it accurately, and that's one of the places for some of these computer vision models. It's a field broadly called deep learning, and it's just great at achieving complex tasks.


So, once you recognize views, then the other thing that computers have been shown to be able to do is recognize specific objects within an image. For example, you could give it an entire football field and it could find a single player within it. You could
recognize where the players are, where the ball is, where the
grass is. So computers can distinguish all those things too. And
then once you know where something is, you can trace it and you
can measure it. So in that sense it's very similar to what a
human reader would do, it's just broken down into individual
steps, and each one of those needs to be trained.


Dr Carolyn Lam: You put that so simply so that everyone could understand. That's so cool. You mentioned, though, accuracy. I could imagine that a machine would likely interpret one image the same way again and again, and that addresses something that we really struggle with in echo, doesn't it? Because, frankly, between one reader and another, we always know ejection fraction has got a plus or minus seven or something, and even within the same reader, you could read the same study and say one thing one day and something else another day. So this is more than just automating it, isn't it?


Dr Rahul Deo: Yeah, so it's certainly making it more consistent, and the other thing that we were able to do, I mean, once you can teach it to identify and trace the contours of the heart in one image, you can have it do that in every single image within the video, and every single video within the study. So now, I mean, it's quite painful, I know this from my own experience, in terms of tracing these things, so a typical reader can't trace 150, 200, 300, 500 different hearts; that's not going to happen. So instead, they'll sort of sift through manually, pick one or two, and if there's variability from one part of the study to the other, that really won't be captured.


And in this case, the computer will very happily do exactly what you ask it to do, which is to repeat the same thing again and again and again, and then be able to average over that and capture variability. So that's one of the tasks that is much easier to imagine: setting a computer, which won't talk back to you and won't resist and won't refuse, to actually take on the mundane aspect of just getting many, many, many more measurements. And that could happen not only in a single study, but also could happen more frequently. So you could imagine that, again, there's just not that resistance that comes from having to have an individual do these things.


Dr Carolyn Lam: Oh, my goodness, and not only does it not ... well, he, the machine, not say no, I mean, they don't need to take time off or weekends off. We could get immediate reports directly. Oh my goodness. Victoria, I have to bring you in on this. We knew as editors when we found this paper that this is something we just have to publish in Circulation, that it's going to be groundbreaking. Could you tell us a little bit more about what you think the implications of this are?


Victoria Delgado: I think that this is a very important paper because it's a very large study and it sets out, I would say, three important questions that we deal with every day in clinical practice. One is how to reduce the burden in very busy echo labs by facilitating the reporting and interpretation of the echoes. Second, to have an accurate measurement and quantification of the images that we are acquiring. And third, recognition of the pattern.


And I think that this is very important, particularly in primary care because, for example here in Europe, echocardiography is not really available in primary care and the patients are being referred to secondary-level or third-level hospitals. That means that the waiting time is sometimes too long. If we train the general practitioners, for example, to do simple echocardiograms with the handheld systems, which are also the technologies that are coming and are really available on your iPhone, for example, on your phone, you can get an echocardiographic evaluation of a patient that comes to a general practitioner.


And if you don't have too much knowledge of interpretation, these tools that can recognize the pattern of the disease can raise a red flag and say, okay, this patient may have this disease or may have this problem, you should consider sending or referring this patient to us at Leiden Hospital, where he's going to have a regular check-up and a complete echocardiogram. That could lead to less burden in very busy labs, with only the patients who have to be referred being sent in a timely manner to the centers, while the others can wait or can be referred much later.


I think that that's important, and there are two technologies coming now that will be very important, some groundbreaking technologies. One is the handheld systems, the ones that you can have on your phone, the ones that you can have on your tablet, for example. And the other one is going to be the artificial intelligence to, if not diagnose completely, at least recognize the pattern, that there is a pathology where we need to focus, and we need to act earlier.


Dr Rahul Deo: I think that one place we would like to see this used is in a primary care setting where you have individuals who have risk factors that we know would be risk factors, for example, for let's say heart failure with preserved ejection fraction. But really, my experience in that phase of clinical practice is there's a lot of resistance from patients to get on the medications. So hypertension is, at that point, often dismissed as, I just got worked up because I had a hard time finding parking, and so on, and so on, where there's just a natural resistance.


So if you could imagine having objective measures describing, let's say, how their left atrium is doing at that point, how it looks the next year, what the change in therapy is doing, all these things, and if you actually can bring in that quantification at a low enough cost that makes it practical, then that would be one place we could imagine motivating or intensifying therapies on the basis of something like this.


And I think one area we have to admit we didn't solve is facilitating getting the data in the first place. We do know that there are these focused workshops around trying to get some simple views, and more and more of our internal medicine residents are able to get some of these, but we can't dismiss that this is still an important challenge in terms of being able to get the images. What we want to do is say, well, you can get some images and we can help you interpret and quantify them, in an effort to try to motivate therapies being initiated or intensified in a way that's sometimes difficult to do in the current system.


Dr Carolyn Lam: So, Rahul and Victoria, you both mentioned that one of the key aspects is the acquisition of the echo. Not just the machine that does it, but also who takes the images that will then be automatically analyzed. So, Rahul, do you think that someday you're going to invent something that will replace even the acquisition, or maybe even simplify it so that we may not need Doppler anymore?


Dr Rahul Deo: One of the things that we thought about was, we wanted to limit
ourselves to views that might be easier to acquire, in part
because we wanted to reduce the complexity of the study and yet
still try to capture as much information as possible. And getting
back to the first part of your question, you could imagine that
recognizing a view is not that different from recognizing that a
view is 10 degrees off from where it should be. You could imagine
training a computer to do just that very same thing too. It could
recognize a slightly off axis apical four chamber view and guide
you into correctly positioning the probe, and you could even
imagine a robotic system that does this and just takes the person out of it altogether. In part because a very skilled
sonographer can quickly look at something and say, oh I just need
to tilt my wrist this way and move it this way. I was always
humbled by that because I never could quite do that myself.


But in the same way, what's happening is that an image is recognized, and then the reference image is held in one's brain, and then they just know from experience what needs to be done to turn one into the other. But that very well-oiled machine could very well be taught to do that exact same thing too.


Dr Carolyn Lam: Oh wow. That is just totally amazing. I know the listeners are being blown away by this, just as I am. Let me just end by asking for any last words, Victoria and Rahul, on the clinical application of this. When are we going to have this in primetime? What do you think?


Victoria Delgado: I think that this is coming. This is one of the first studies, for example, showing the feasibility of this technology. In terms of accuracy, probably we need improvement, but that depends very much on the quality of the echocardiographic data that we obtain. And in the future, I think that we are going to rely more and more on this technology, and we will have the expert view for those cases that are ambiguous or where the technology has limitations. But in terms of accuracy, for example, I can imagine one of the clinical scenarios that we face in everyday clinical practice is the evaluation of the effect of treatment on ejection fraction in heart failure patients, and in patients, for example, treated with chemotherapy, to see changes in ejection fraction.


That, if we do it manually as we do now, we know that we have limitations in terms of the observer's own variability. If you leave it to artificial intelligence, maybe that variability may be reduced, and you may be better in terms of adjusting the medication if needed, because you have removed completely what would be the individual variability. So these are the fields where I probably see more and more application of this technology, in order to improve the reproducibility of the measurements and accuracy. But yeah, for that we probably need very good image quality, and in echocardiography we always tend to say, yeah, the image quality is not that good. I'm sure that echocardiography can give you much more than just the standard echocardiography: you can use contrast, you can use many other techniques in order to improve the image quality. And with artificial intelligence, the better the image quality is, probably the better the accuracy of the measurements and the recognition of disease are going to be as well.


Dr Carolyn Lam: Wow, and Rahul?


Dr Rahul Deo: I completely agree with Victoria. I think that we're going to have to be clever about where we incorporate something like this into the current clinical workflow. You have to choose your problem carefully, you have to understand it. Any system like this is going to make some mistakes. We have to figure out how to minimize the impact of those mistakes, and at the same time add benefit and potentially enable things that wouldn't even be done otherwise. So I think that the fun stuff is yet to come here, in terms of really incorporating this in a way that can really change clinical practice.


I want to add one thing that I really haven't mentioned. We, at this point, really just focused on trying to mimic the stuff that we're already doing. Part of the motivation of this work is to try to potentially see things that we can't even see right now, and try to potentially predict onset of disease or early latent forms of something that would really be difficult to detect by the human eye. And we've seen examples of that in some of the other fields around radiology, and I think that's going to be a place that would be augmenting beyond what we're even doing currently.


But of course, the challenge is that the system has to be
interpretable enough that we understand what it is that it's
seeing, because otherwise I'm sure we'll be reluctant to embrace
something clinically that we don't understand.


Dr Carolyn Lam: You've been listening to Circulation on the Run. Don't forget to tune in again next week.

