CSC 4651 - Deep Learning in Signal Processing

2 lecture hours, 2 lab hours, 3 credits
Course Description
This elective course provides an overview of deep learning methods and models as used in digital signal processing (DSP), including key DSP concepts that appear in and adjacent to such models in both real-time and offline applications. The course begins with the basics of creating and evaluating deep learning models and then alternates between DSP and deep learning topics. Deep learning structures such as convolutional layers, recurrent networks, dropout, and autoencoders are covered. DSP topics, including frequency response, the role of convolution, and spectrograms, are covered with an emphasis on how they support deep learning models. Examples of audio and image applications (time and space as independent variables of a potentially multidimensional sampled signal) are included throughout, supporting further work in video processing, medical image processing, etc. A variety of current models are studied throughout the term. Topics of student interest are addressed through special lectures and course projects. Laboratory exercises include several weeks of guided exercises and culminate in a term project. (prereq: (MTH 2340 or MTH 2130) and (ELE 3320 or CSC 3310 or CSC 2621)) (quarter system prereq: (MA 383 or BE 2200) and (CS 2040 or CS 3210 or EE 3221))
Course Learning Outcomes
Upon successful completion of this course, the student will be able to:
  • Describe state-of-the-art deep learning structures for addressing signal processing problems
  • Articulate the calculations done during backpropagation and explain various challenges that may occur (exploding gradients, uncontrollable parameters, etc.)
  • Evaluate proposed solutions and interpret published results using standard metrics such as accuracy, precision, recall, and top-N accuracy
  • Break down modern deep learning architectures into their components and characterize the function of those components
  • Modify existing network structures and evaluate the impact on model performance for new applications
  • Appraise a data set in terms of quantity, quality, and balance, and suggest appropriate mitigations of data set issues
  • Apply the spectrogram as a foundation for signal detection, classification, and enhancement
  • Differentiate between commonly used objective and subjective metrics and critique their use in backpropagation and model evaluation
  • Relate a signal processing view of convolution to a deep learning view through the concepts of time-invariance, linearity, and feature extraction (as sketched below)
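
As a point of reference for the convolution outcome above, the following is a minimal sketch, assuming NumPy and PyTorch are available; the filter taps and test signal are made up for illustration. It shows that a Conv1d layer loaded with fixed weights computes the same linear, time-invariant operation as a direct DSP convolution, up to PyTorch's cross-correlation convention.

```python
import numpy as np
import torch

# Hypothetical FIR filter taps and test signal, for illustration only.
taps = np.array([0.25, 0.5, 0.25], dtype=np.float32)  # simple smoothing filter
x = np.random.randn(64).astype(np.float32)            # arbitrary 1-D signal

# DSP view: linear, time-invariant filtering via direct convolution.
y_dsp = np.convolve(x, taps, mode="valid")

# Deep learning view: a Conv1d layer carrying the same taps.
# PyTorch's Conv1d computes cross-correlation, so the taps are flipped
# to match the signal processing convention.
conv = torch.nn.Conv1d(in_channels=1, out_channels=1,
                       kernel_size=len(taps), bias=False)
with torch.no_grad():
    conv.weight.copy_(torch.tensor(taps[::-1].copy()).view(1, 1, -1))

y_dl = conv(torch.tensor(x).view(1, 1, -1)).detach().numpy().ravel()

print(np.allclose(y_dsp, y_dl, atol=1e-5))  # expected: True
```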

Prerequisites by Topic
  • Linear algebra and/or multivariable calculus
  • Basic software design, ideally in the context of data science, numeric applications, or AI

Course Topics
  • Training pipelines
  • Loss functions including perceptual losses
  • Confusion matrices and performance metrics
  • Convolutional layers of various dimensions used on both time series and time-frequency representations of data
  • Mitigation of overfitting, including dropout and batch normalization
  • Principal components analysis and autoencoders
  • Various types of recurrent neural networks (RNNs)
  • Common network architectures
  • Frequency response
  • Discrete Fourier transforms
  • Spectrograms, windowing, and perfect reconstruction (as sketched below)
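
As a hedged illustration of the spectrogram, windowing, and perfect-reconstruction topics above, the sketch below assumes SciPy is available and uses an arbitrary test tone and STFT parameters. It computes a short-time Fourier transform with a Hann window at an overlap satisfying the constant-overlap-add condition, then verifies that the inverse transform recovers the signal.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16_000                          # assumed sample rate (Hz)
t = np.arange(fs) / fs               # one second of samples
x = np.sin(2 * np.pi * 440 * t)      # arbitrary 440 Hz test tone

# Short-time Fourier transform: Hann window with 75% overlap satisfies the
# constant-overlap-add (COLA) condition needed for perfect reconstruction.
f, tau, Z = stft(x, fs=fs, window="hann", nperseg=512, noverlap=384)

spectrogram = np.abs(Z) ** 2         # magnitude-squared time-frequency representation

# Inverse STFT: with a COLA-compliant window and hop, the signal is recovered
# up to numerical precision.
_, x_rec = istft(Z, fs=fs, window="hann", nperseg=512, noverlap=384)

print(np.allclose(x, x_rec[: len(x)], atol=1e-10))  # expected: True
```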

Laboratory Topics
  • Basic DNN training (see the sketch following this list)
  • Signal representation/transfer learning
  • Model pruning
  • Hyperparameter optimization
  • Project
  • Presentations
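
For orientation before the first laboratory exercise, the following is a minimal sketch of basic DNN training, assuming PyTorch and using made-up random feature vectors and labels; the actual labs use real signal data and course-specified architectures.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in data: 1000 feature vectors of length 64, 4 classes.
X = torch.randn(1000, 64)
y = torch.randint(0, 4, (1000,))
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

# Small fully connected classifier; labs would substitute convolutional or
# recurrent structures as appropriate.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Dropout(0.2), nn.Linear(128, 4))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    total, correct, running_loss = 0, 0, 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        logits = model(xb)
        loss = loss_fn(logits, yb)
        loss.backward()          # backpropagation
        optimizer.step()         # parameter update
        running_loss += loss.item() * xb.size(0)
        correct += (logits.argmax(dim=1) == yb).sum().item()
        total += xb.size(0)
    print(f"epoch {epoch}: loss={running_loss / total:.3f} accuracy={correct / total:.3f}")
```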

Coordinator
Dr. Eric Durant
