<p><em>Proudly sponsored by </em><a href="https://www.pymc-labs.io/" rel="noopener noreferrer" target="_blank"><em>PyMC Labs</em></a><em>, the Bayesian Consultancy. </em><a href="https://calendar.google.com/calendar/appointments/schedules/AcZssZ1nOI_SElJzSiQ2sXBDiaW9w98ErjnHVzmHcSilYNWeXxJgV870NGuWZUGo3W-8-gDG8jIXQhBf" rel="noopener noreferrer" target="_blank"><em>Book a call</em></a><em>, or </em><a href="mailto:
[email protected]" rel="noopener noreferrer" target="_blank"><em>get in touch</em></a><em>!</em></p><ul><li><a href="https://topmate.io/alex_andorra/503302" rel="noopener noreferrer" target="_blank">Intro to Bayes Course</a> (first 2 lessons free)</li><li><a href="https://topmate.io/alex_andorra/1011122" rel="noopener noreferrer" target="_blank">Advanced Regression Course</a> (first 2 lessons free)</li></ul><br/><p><em>Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his </em><a href="https://bababrinkman.com/" rel="noopener noreferrer" target="_blank"><em>awesome work</em></a><em>!</em></p><p>Visit our <a href="https://www.patreon.com/learnbayesstats" rel="noopener noreferrer" target="_blank">Patreon page</a> to unlock exclusive Bayesian swag ;)</p><p><strong>Takeaways:</strong></p><ul><li class="ql-align-justify">DADVI (Deterministic ADVI) is a new approach to variational inference that improves both speed and accuracy by replacing stochastic optimization with a deterministic objective.</li><li class="ql-align-justify">DADVI allows for faster Bayesian inference without sacrificing model flexibility.</li><li class="ql-align-justify">Linear response corrections can recover accurate posterior covariance estimates from mean-field mean estimates.</li><li class="ql-align-justify">DADVI performs well in mixed models and hierarchical structures.</li><li class="ql-align-justify">Normalizing flows present an interesting avenue for enhancing variational inference.</li><li class="ql-align-justify">DADVI scales effectively to large datasets, improving predictive performance.</li><li class="ql-align-justify">Future enhancements for DADVI may include GPU support and linear response integration.</li></ul><br/><p class="ql-align-justify"><strong>Chapters</strong>:</p><p class="ql-align-justify">13:17 Understanding DADVI: A New Approach</p><p class="ql-align-justify">21:54 Mean Field Variational Inference Explained</p><p class="ql-align-justify">26:38 Linear Response and Covariance Estimation</p><p class="ql-align-justify">31:21 Deterministic vs Stochastic 
Optimization in DADVI</p><p class="ql-align-justify">35:00 Understanding DADVI and Its Optimization Landscape</p><p class="ql-align-justify">37:59 Theoretical Insights and Practical Applications of DADVI</p><p class="ql-align-justify">42:12 Comparative Performance of DADVI in Real Applications</p><p class="ql-align-justify">45:03 Challenges and Effectiveness of DADVI in Various Models</p><p class="ql-align-justify">48:51 Exploring Future Directions for Variational Inference</p><p class="ql-align-justify">53:04 Final Thoughts and Advice for Practitioners</p><p><strong>Thank you to my Patrons for making this episode possible!</strong></p><p><em>Yusuke Saito, Avi Bryant, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël...