AI and Deep Learning – the Chips Changing the World

With Dr. Norman Rubin – Principal Research Scientist (retired), NVIDIA

This talk provides an introduction to machine learning, an overview of the hardware companies innovating in the area, and a deep dive into the mainstream chips used in AI, ML, and deep learning. Machine learning, and deep learning in particular, has recently revolutionized domains such as vision and speech. The computation within deep learning is mostly low-precision linear algebra. Because ML addresses many important problems using a limited set of computational patterns, it benefits from domain-specific architectures such as GPUs or custom chips. Norm will also describe how computer architecture is changing to accommodate neural nets and to solve problems of increasing complexity. He will discuss the difference between "training" and "inference" chips and the major players in both verticals (large companies and startups), concluding with an industry investment analysis.
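As an illustrative sketch (not taken from the talk), the snippet below shows why "mostly low-precision linear algebra" matters for hardware: a fully connected layer reduces to a matrix multiply, and casting the operands to float16 trades a small amount of accuracy for large savings in memory traffic and silicon area, which is the pattern GPUs and custom AI chips are built around. The array shapes and the use of NumPy are assumptions made for the example.

```python
# Minimal sketch (assumed shapes, NumPy only): a fully connected layer
# is just a matrix multiply plus a nonlinearity, and most deep-learning
# FLOPs look like this.
import numpy as np

rng = np.random.default_rng(0)

# Activations for a batch of 64 inputs and the weights of one layer.
x = rng.standard_normal((64, 1024)).astype(np.float32)
w = rng.standard_normal((1024, 4096)).astype(np.float32)

# Full-precision reference: y = relu(x @ w).
y_fp32 = np.maximum(x @ w, 0.0)

# Low-precision version: cast operands to float16 before the multiply,
# analogous to what tensor cores and inference ASICs do in hardware.
y_fp16 = np.maximum(x.astype(np.float16) @ w.astype(np.float16), 0.0)

# The accuracy loss is modest relative to the bandwidth and area savings.
rel_err = np.abs(y_fp32 - y_fp16.astype(np.float32)).max() / np.abs(y_fp32).max()
print(f"max relative error from float16 arithmetic: {rel_err:.4f}")
```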

About the Speaker

Dr. Norm Rubin is an expert in compilers, system software, GPUs, and other devices. He capped a 35-year career in programming and microprocessors by spending his last six years as a Principal Research Scientist at NVIDIA, working on the design of specialized tools for deep learning. Before that he was a Fellow at AMD for 11 years, where he was the architect and lead implementer of the widely used GPU compilers at AMD/ATI, covering both compute and graphics. Dr. Rubin holds multiple patents in computer architecture and compilers, one of which currently ships on millions of machines, including cell phones, consoles, and PCs. He has authored many peer-reviewed articles and has spoken at numerous conferences. Norm holds a PhD from the Courant Institute at NYU and is also known for his work on binary translators and dynamic optimizers.

Schedule a Consultation with Norm
