<p>What is a large language model (LLM), actually? How do these systems work? Why can they feel like human conversation partners, and why is that perception misleading? Mat and Phil open the podcast by answering these questions and discussing the key implications for language teaching and learning.</p><p>In this episode:</p><ul><li>Phil Hubbard: <a target="_blank" rel="noopener noreferrer nofollow" href="https://web.stanford.edu/~efs/phil/">https://web.stanford.edu/~efs/phil/</a></li><li>Mat Schulze: <a target="_blank" rel="noopener noreferrer nofollow" href="https://PantaRhei.press/mat">https://PantaRhei.press/mat</a></li><li>Emily Bender: <a target="_blank" rel="noopener noreferrer nofollow" href="https://faculty.washington.edu/ebender/">https://faculty.washington.edu/ebender/</a></li><li>Open-access position paper on <strong><em>Sustained Integrated Professional Development for GenAI (GenAI-SIPD)</em></strong>: <a target="_blank" rel="noopener noreferrer nofollow" href="https://www.igi-global.com/gateway/article/378304">https://www.igi-global.com/gateway/article/378304</a></li></ul>