AM 2: Introduction to Deep Learning
Winter Semester 2017/2018
Dr. Sebastian Stober
Mon 14-16; Campus Golm, House 14, Room 0.09
Begins: Monday, October 16
“I have worked all my life in Machine Learning, and I’ve never seen one algorithm knock over benchmarks like Deep Learning.”
Andrew Ng (Stanford / Baidu)
Over the last decade, so-called “deep learning” techniques have become very popular in application domains such as computer vision, automatic speech recognition, natural language processing, and bioinformatics, where they produce state-of-the-art results on various challenging tasks. Most recently, deep neural networks even succeeded in mastering the game of Go.
A crucial success factor of deep neural networks is their ability to learn hierarchies of increasingly complex features from raw input data. Such learned representations often outperform traditional hand-crafted features, which require expensive human effort and expertise. But they do not come as a free lunch. Designing and training deep neural networks so that they actually learn meaningful and useful feature representations of data is an art in itself. Mastering it requires practice and experience, and this course offers many opportunities to obtain both.
This is an intense course following a “flipped classroom” design that builds on everybody’s active participation. The curriculum roughly follows Part II of the Deep Learning Book but also covers recently published advances in the field. You will be responsible for preparing for each class by reading selected literature or watching online video lectures and talks. Each student will further reflect on their learning progress in a brief weekly blog post. This can, for instance, include open questions, self-discovered answers, or pointers to additional helpful material. Most of the in-class time will be used to discuss open questions about the concepts under study. There will also be increasingly complex programming tasks in which you can apply and practice what you have just learned and develop new ideas. All implementations will be in Python using TensorFlow as the base framework.
As the final course project in January and February 2018, you will explore ideas to improve an existing automatic speech recognition system. This will be a collaborative team effort with a friendly competition to see which team eventually obtains the best results. You will have access to professional-grade GPU compute servers and a dataset of approximately 1000 hours of transcribed audio recordings. Each team will document its progress in blog posts and describe its final solution in a short paper.
The course primarily addresses students in the Cognitive Systems Master program but is also open to a general audience. As prerequisites, you should already have basic knowledge of machine learning in general as well as of linear algebra and probability theory. Those who would like a refresher prior to taking this class can find an excellent compilation in Part I of the Deep Learning Book. You also need to already know how to program in Python, and of course you should bring some curiosity to embark on an open-ended and exploratory learning process.
The number of participants in this course is limited to 25. If you would like to attend, please send a short application email (200-500 words) introducing yourself, explaining your motivation for taking the course, and describing your prior experience. We will use this information to make a selection if necessary.