Large language models such as ChatGPT and Claude communicate with remarkable coherence. Yet what this says about their “intelligence” isn’t clear. Could they reach the same level of intelligence as humans without taking the same evolutionary or learning path to get there? Or, if they’re not on a path to human-level intelligence, where are they now, and where will they end up? In this episode, with guests Tomer Ullman and Murray Shanahan, we look at how large language models function and examine differing views on how sophisticated they are and where they might be going.

COMPLEXITY

Santa Fe Institute

Nature of Intelligence, Ep. 3: What kind of intelligence is an LLM?

OCT 23, 2024 • 45 MIN

Description

Guests:
Tomer Ullman, Assistant Professor, Department of Psychology, Harvard University
Murray Shanahan, Professor of Cognitive Robotics, Department of Computing, Imperial College London; Principal Research Scientist, Google DeepMind

Hosts: Abha Eli Phoboo & Melanie Mitchell
Producer: Katherine Moncure
Podcast theme music by: Mitch Mignano

Follow us on: Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky

More info:
Tutorial: Fundamentals of Machine Learning
Lecture: Artificial Intelligence
SFI programs: Education

Books:
- Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
- The Technological Singularity by Murray Shanahan
- Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds by Murray Shanahan
- Solving the Frame Problem by Murray Shanahan
- Search, Inference and Dependencies in Artificial Intelligence by Murray Shanahan and Richard Southwick

Talks:
- The Future of Artificial Intelligence by Melanie Mitchell
- Artificial Intelligence: A Brief Introduction to AI by Murray Shanahan

Papers & Articles:
- “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled,” in The New York Times (Feb 16, 2023)
- “Bayesian Models of Conceptual Development: Learning as Building Models of the World,” in Annual Review of Developmental Psychology, Volume 2 (Oct 26, 2020), doi.org/10.1146/annurev-devpsych-121318-084833
- “Comparing the Evaluation and Production of Loophole Behavior in Humans and Large Language Models,” in Findings of the Association for Computational Linguistics (December 2023), doi.org/10.18653/v1/2023.findings-emnlp.264
- “Role Play with Large Language Models,” in Nature (Nov 8, 2023), doi.org/10.1038/s41586-023-06647-8
- “Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks,” arXiv (v5, March 14, 2023), doi.org/10.48550/arXiv.2302.08399
- “Talking about Large Language Models,” in Communications of the ACM (Feb 12, 2024)
- “Simulacra as Conscious Exotica,” arXiv (v2, July 11, 2024), doi.org/10.48550/arXiv.2402.12422