Deep learning is rapidly changing the way machines look at the world. From predicting optimal decisions for autonomous vehicles, to finding underlying similarities in high-dimensional data, to better understanding natural language, deep learning shows enormous promise. Because the field is still relatively young, it can be hard to know where to begin in order to become well-versed in deep learning theory and applications.
In this course, we hope to introduce many of the foundational concepts needed to understand and implement state-of-the-art techniques, or even discover new ones! We'll start with modern practical deep networks and many of their intricacies. Afterwards, we'll spend a few weeks diving into more theoretical material that is currently the subject of active deep learning research. Finally, just for fun, we'll cover some miscellaneous topics such as kernel methods and deep reinforcement learning.
Each week, we'll spend about an hour in a lecture-based setting, then switch gears to an interactive reading group where we discuss and delve into recent or important deep learning papers.
| Lecture | James Bartlett, Jordan Prosky, Quinn Tran, Michael Zhang, Neel Kant | 0 | Wurster 102 | [W] 5:00PM-7:00PM | 09/06/2017 |