Episode 2: A Conversation with George Smith and Daniel Dennett

This episode of In Context features George Smith and Daniel Dennett, both professors in the department of philosophy at Tufts University. George Smith specializes in philosophy of science and logic, with a particular interest in how data is transformed into evidence. Co-editor of the Cambridge Companion to Newton, he is an expert on how Isaac Newton transformed notions of high-quality evidence. Daniel Dennett focuses on philosophy of mind and cognitive science, and has written multiple books on consciousness.

Throughout this conversation, George and Dan share their thoughts on how theories turn data into evidence, what contributions Descartes, Newton, Hume, and Darwin made to the history of philosophy, what consciousness is and isn't, why we fall prey to anthropomorphizing machines, and how we should engage with AI systems.

Episode 1: A Conversation with Rich Sutton

This episode of In Context features Rich Sutton, a pioneer in the field of reinforcement learning, a computational approach to learning in which an agent tries to maximize the total reward it receives while interacting with a complex, uncertain environment. A professor in the department of Computing Science at the University of Alberta and an advisor for RBC and DeepMind, Professor Sutton has devoted much of his life to AI research. His 1998 textbook, Reinforcement Learning: An Introduction, is widely used today.

Throughout this conversation, Rich shares his insights on the history of reinforcement learning, current developments in temporal difference learning (a subfield of reinforcement learning), the fundamental difference between reinforcement learning and supervised learning, the importance of intuition and critical thinking in scientific research, and the parallels between learning algorithms and the big choices we make in life.