<p>Artificial intelligence chatbots have come a long way in a short time.</p><p>Each release of ChatGPT brings new features, like voice chat, along with updates to the training data fed into the systems, intended to make them smarter.</p><p>But are more leaps forward a sure thing? Or could the tools actually get dumber?</p><p>Today, Aaron Snoswell from the generative AI lab at the Queensland University of Technology discusses the limitations of large language models like ChatGPT.</p><p>He explains why some observers fear ‘model collapse’, where more mistakes creep in as the systems start ‘inbreeding’, consuming more AI-created content than original human-created works.</p><p>Aaron Snoswell says these models are essentially pattern-matching machines, which can lead to surprising failures.</p><p>He also discusses the massive amounts of data required to train these models and the creative ways companies are sourcing it.</p><p>The AI expert also touches on the concept of artificial general intelligence and the challenges of achieving it.</p><p>Featured:</p><p>Aaron Snoswell, senior research fellow at the generative AI lab at the Queensland University of Technology</p><p>Key Topics:</p><ul><li>Artificial Intelligence</li><li>ChatGPT</li><li>Large Language Models</li><li>Model Collapse</li><li>AI Training Data</li><li>Artificial General Intelligence</li><li>Responsible AI Development</li><li>Generative AI</li></ul>