101 - The lottery ticket hypothesis, with Jonathan Frankle
41 minutes
5 years ago
In this episode, Jonathan Frankle describes the lottery ticket
hypothesis, a popular explanation of how over-parameterization
helps in training neural networks. We discuss pruning methods used
to uncover subnetworks (winning tickets) which were initialized in
a particularly effective way. We also discuss patterns observed in
pruned networks, stability of networks pruned at different time
steps and transferring uncovered subnetworks across tasks, among
other topics.
The paper on the topic by Frankle and Carbin, ICLR 2019: https://arxiv.org/abs/1803.03635
Jonathan Frankle’s homepage: http://www.jfrankle.com/
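The pruning procedure discussed in the episode can be illustrated with a minimal NumPy sketch of one-shot magnitude pruning with rewinding: keep the largest-magnitude weights after training, then reset the surviving weights to their original initialization to obtain the "winning ticket". This is a simplified illustration, not the paper's full iterative method; the training step is simulated here with added noise.

```python
import numpy as np

def magnitude_prune_mask(weights, prune_fraction):
    """Binary mask keeping the largest-magnitude weights."""
    keep = int(weights.size * (1 - prune_fraction))  # number of weights to keep
    threshold = np.sort(np.abs(weights).ravel())[::-1][keep - 1]
    return (np.abs(weights) >= threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
init = rng.normal(size=(4, 4))                        # theta_0: initial weights
trained = init + rng.normal(scale=0.5, size=(4, 4))   # stand-in for trained weights

mask = magnitude_prune_mask(trained, prune_fraction=0.5)  # drop the smallest 50%
winning_ticket = mask * init                              # rewind survivors to theta_0
```

In the full lottery ticket procedure this prune-and-rewind step is repeated iteratively, removing a small fraction of weights per round and retraining the masked network from the rewound weights each time.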