Are we currently experiencing a deep transformation in the concept of software unlike anything we’ve seen in the past 30 years? Peter Wang, the Co-Founder and CTO of Anaconda, the world’s most popular Python data science platform, thinks so. In this episode of In Context, he joins Kathryn Hume for a discussion of data-centric computing and how the data and operations that enable software are moving closer together.
Could something like measurement, which seems boring on the surface, actually be the secret to getting value from machine learning? Measurement isn’t always what you think it is. Great product developers and applied machine learning practitioners are finding creative ways to incorporate it as part of their feedback loops and machine learning products. In this episode, Kathryn Hume talks with Danny Lange, the VP of AI and Machine Learning at Unity Technologies. In addition to discussing the role of measurement in putting reinforcement learning systems into production, they also dive into why gaming is an excellent simulation ground for some of the most exciting developments in applied machine learning and reinforcement learning research.
In this episode of the In Context podcast, we welcome Len D’Avolio. Len is Co-Founder and CEO of Cyft, a Boston-based machine learning startup that turns healthcare organizations into learning healthcare systems. In conversation with Kathryn Hume, he talks about what does and doesn’t work in helping companies shift from a deterministic to a probabilistic operating model, through the lens of the healthcare industry.
In our latest episode of the In Context Podcast, we're joined by Eric Colson, the Chief Algorithms Officer for Stitch Fix, an online subscription personal shopping service whose tech blog, Multithreaded, may have the coolest algorithms tour on the internet. In a fascinating discussion, he and Kathryn Hume analyze how Stitch Fix's engineering culture works, including what they value, what they look for in new hires, and how they’ve architected their platform to enable astounding success. You'll also hear about the critical role that autonomy plays in how Eric organizes his data science teams.
In this episode of the In Context podcast, we welcome Susan Etlinger, an industry analyst at Altimeter Group who focuses on data, conversational business, and ethics in the age of artificial intelligence. In their conversation, she and Kathryn Hume look at how AI is unlocking the promise of design thinking to enable amazing customer experiences, how the rapidly evolving technology landscape is changing consumer expectations around trust, and why enterprises should view ethics as a competitive differentiator rather than merely a compliance exercise. Find out how you can break these issues down and start thinking about them clearly for your business.
In our latest episode of the In Context Podcast, we welcome Clare Corthell, the Founder of Luminant Data, and Sarah Catanzaro, a Principal at Amplify Partners. Join us for a fascinating discussion about data products and where value and work occur in machine learning pipelines. Find out about the competitive concerns that many companies have as they think about disclosing their data to third parties, and the challenges and opportunities companies face as they consider entering this highly competitive space.
This episode of the In Context Podcast features a conversation with artificial intelligence researchers Randy Goebel and Osmar Zaiane, both professors at the Alberta Machine Intelligence Institute (AMII). We discuss the limitations of deep supervised learning and explore alternatives that make machine learning systems easier to understand, explain, and teach.
This episode features a conversation with machine learning researchers Graham Taylor (University of Guelph) and David Duvenaud (University of Toronto). We discuss how deep learning enables us to exploit the creative potential of framing tasks and phenomena as optimization problems. We cover a broad set of examples, like machine creativity, automating the design of neural network architectures, variational inference (that is, approximating a complicated probability distribution with a simpler one that is tractable for machine learning), and the mathematical structure behind making hard choices.