This episode features a conversation with machine learning researchers Graham Taylor (University of Guelph) and David Duvenaud (University of Toronto). We discuss how deep learning enables us to exploit the creative potential of framing tasks and phenomena as optimization problems. We cover a broad set of examples, including machine creativity, automating the design of neural network architectures, variational inference (that is, approximating a complicated probability distribution with a simpler one that's easier to compute with), and the mathematical structure behind making hard choices.

In the podcast, you’ll learn about:

  • Different types of machine creativity

  • Why deep learning was a conceptual breakthrough

  • What variational inference is and why it’s important

  • The latest research in automating the design of deep learning architectures

  • What Geoffrey Hinton and mathematician Srinivasa Ramanujan have in common

About Graham and David

Mentioned in the Interview