Ferdowsi University of Mashhad, Fall 2023 (1402 SH)
Welcome to the Deep Learning Course!
Lectures
Getting Started
This course draws on a variety of resources to teach the concepts and practical applications of deep learning. Some of these resources are highlighted on the About Page. The primary reference is Deep Learning with Python by François Chollet, second edition (2021).
1402/07/12
- Book
- Deep Learning with Python, by François Chollet, Second Edition, 2021
- Jupyter notebooks
- Intro to Tensors, Auto Grad & Gradient Descent
- Programming
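As a quick illustration of the "Auto Grad & Gradient Descent" topic (a pure-Python sketch, not the course notebook — here the gradient is written by hand, standing in for what autograd would compute):

```python
# Minimize f(x) = (x - 3)^2 by gradient descent.
# grad_f is the hand-derived gradient 2*(x - 3); in the course notebook,
# a framework's autograd would produce this automatically.

def grad_f(x):
    return 2.0 * (x - 3.0)

def gradient_descent(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)  # step against the gradient
    return x

x_min = gradient_descent(0.0)  # converges toward the minimizer x = 3
```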
Further Reading
TensorFlow vs PyTorch: Deep Learning Frameworks, 2023
Learn PyTorch: A website that provides tutorials and examples on how to use PyTorch for various deep learning tasks.
Autograd, Continue ...
1402/07/16
- Previous Codes
- Using autograd to Solve a Math Puzzle
Colab chapter02_mathematical-building-blocks
Further Reading
Gradient Descent and its variants
1402/07/23
- Gradient Descent - until 12.3.3.2. Convergence Analysis
- Matrix Derivative - Proposition 5, Propositions 7 & 8
- What worries me about AI
1402/07/26
Class Canceled
Gradient Descent and its variants, continue ...
1402/07/30
Paper The Application of Taylor Expansion in Reducing the Size of Convolutional Neural Networks for Classifying Impressionism and Miniature Style Paintings
HW1 Real Time Sudoku Solver, due: 1402/08/04; extended to 1402/08/12, with a 10 percent penalty for each day of delay after 08/04
HW2 Generate Persian Sudoku, due: 1402/08/08; extended to 1402/08/13
Further Reading
1402/08/07
- Momentum
- Momentum, until 12.6.1.4
- Chapter 2: The mathematical building blocks of neural networks
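The momentum update covered here can be sketched in a few lines of plain Python (an illustrative heavy-ball version of the rule in the d2l Momentum section, not the course's official code):

```python
# SGD with momentum: v <- beta * v + grad(x);  x <- x - lr * v.
# grad is any callable returning the gradient at x; hyperparameters
# below are illustrative choices, not values fixed by the course.

def momentum_descent(grad, x0, lr=0.05, beta=0.9, steps=500):
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + grad(x)  # accumulate a decaying gradient average
        x -= lr * v
    return x

# Example: minimize f(x) = x^2, whose gradient is 2x.
x_star = momentum_descent(lambda x: 2.0 * x, x0=5.0)
```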
Further Reading
1402/08/10
Class Canceled
1402/08/14
HW3 Modify the code of Chap. 3, due: 1402/08/20
HW4 Using autograd to Solve a Math Puzzle, due: 1402/08/23
1402/08/21
- Chapter 4: Getting started with neural networks: Classification and regression
- Entropy (information theory)
- Binary Cross Entropy/Log Loss for Binary Classification
- CE and Acc
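A minimal sketch of the binary cross-entropy (log loss) from this lecture, in plain Python (illustrative only; the clipping constant `eps` is an assumption to avoid log(0), as frameworks also do internally):

```python
import math

# Binary cross-entropy averaged over a batch:
# BCE = -(1/N) * sum( t*log(p) + (1-t)*log(1-p) )
def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(y_true)

loss = binary_cross_entropy([1, 0], [0.9, 0.1])  # confident, correct predictions
```

Confident correct predictions give a loss near zero; confident wrong ones blow the loss up, which is exactly why the log term is used instead of plain accuracy during training.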
Further Reading
- The Application of Taylor Expansion in Reducing the Size of Convolutional Neural Networks for Classifying Impressionism and Miniature Style Paintings
- Sreenivas Bhattiprolu, Python for Microscopists: many TensorFlow examples
- TF-IDF in NLP
1402/08/24
Further Reading
1402/08/28
HW5 Real Time Sudoku Solver with Persian Digits, due: 1402/09/04
HW6 N-Rook in Chess with auto-grad, due: 1402/09/09
Paper Fully Connected to Fully Convolutional: Road to Yesterday
- Sigmoid, Softmax
- Classification, Object Detection and Image Segmentation
- What is the difference between object detection, semantic segmentation and localization?
- Difference Between Face Detection and Face Recognition
Further Reading
- Minibatch Stochastic Gradient Descent
- Minibatch Stochastic Gradient Descent on Apple Leaves in FC2FC
- Softmax function (Wiki)
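The softmax from this lecture can be sketched in pure Python; the max-subtraction trick below is the standard numerical-stability device (an illustrative implementation, not taken from the course notebooks):

```python
import math

# Numerically stable softmax: subtracting max(logits) leaves the result
# unchanged (it cancels in the ratio) but prevents exp() overflow.
def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # a probability vector, ordered like the logits
```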
1402/09/05
Convolutional Neural Networks
1402/09/08, 12, 19, 22
- Slides
- Classical Convolutional Neural Networks
- Chapter 8 of Deep Learning with Python: Introduction to deep learning for computer vision
Colab Play with Convs & Filters
Colab Image Filtering
Colab Large-Scale Constrained Linear Least-Squares
Colab Chapter 8: Introduction to deep learning for computer vision
HW7 1D Convolution, due: 1402/09/26
HW8 2D Convolution, due: 1402/10/02
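In the spirit of HW7, here is a plain-Python sketch of "valid" 1-D convolution (cross-correlation, as deep-learning libraries implement it); the function name and interface are illustrative, not the official assignment spec:

```python
# "Valid" 1-D cross-correlation: slide kernel k over signal x, no padding.
# Output length is len(x) - len(k) + 1.
def conv1d_valid(x, k):
    n, m = len(x), len(k)
    return [sum(x[i + j] * k[j] for j in range(m)) for i in range(n - m + 1)]

out = conv1d_valid([1, 2, 3, 4], [1, 0, -1])  # a simple edge-detector kernel
```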
Further Reading
- Convolutional Neural Networks
- Networks Using Blocks (VGG)
- What is translation invariance in computer vision and convolutional neural network?
- Understanding Normalisation Methods In Deep Learning
- Batch Normalization
Paper Deep Image Deblurring: A Survey
Paper Convolutional Neural Networks Are Not Invariant to Translation, but They Can Learn to Be
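The Batch Normalization reading above boils down to a simple per-feature transform at training time; a standalone plain-Python sketch (gamma/beta are the learnable scale and shift; this is illustrative, not framework code):

```python
import math

# Batch norm for one feature over a batch: normalize by batch mean and
# variance, then apply learnable scale (gamma) and shift (beta).
def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in xs]

normed = batch_norm([1.0, 2.0, 3.0, 4.0])  # roughly zero-mean, unit-variance
```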
Generative Adversarial Networks
1402/10/03
Further Reading
- StyleGAN: A Style-Based Generator Architecture for Generative Adversarial Networks Wiki
- Generative Adversarial Networks (d2l)
1402/10/10
- Midterm
Variational Autoencoders
1402/10/17
Further Reading
- Autoencoder in TensorFlow 2: Beginner’s Guide
- Principal Component Analysis explained visually
- Intuitively Understanding the KL Divergence
- A Gentle Introduction to Cross-Entropy (and KL Divergence) for Machine Learning
- Transposed Convolution (In Persian)
- Transposed Conv as Matrix Multiplication explained, Medium
- Transpose Convolutions and Autoencoders, CS, Toronto
- Transposed Convolution, Coursera, Andrew Ng
- A guide to convolution arithmetic for deep learning, Montreal
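The KL divergence that appears in the VAE loss is easy to compute for discrete distributions; a pure-Python sketch matching the "Intuitively Understanding the KL Divergence" reading (illustrative; inputs are assumed to be valid probability vectors):

```python
import math

# KL(p || q) = sum_i p_i * log(p_i / q_i); terms with p_i = 0 contribute 0.
# Note KL is asymmetric: KL(p || q) != KL(q || p) in general.
def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

d = kl_divergence([0.5, 0.5], [0.9, 0.1])  # positive: the distributions differ
```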
Computer Vision
Winter 1402
Whether it is medical diagnosis, self-driving vehicles, camera monitoring, or smart filters, many applications in the field of computer vision are closely related to our current and future lives.
- 2. Image Augmentation
- 3. Fine-Tuning
- 4. Object Detection and Bounding Boxes
- 5. Anchor Boxes
- 6. Multiscale Object Detection
- 7. The Object Detection Dataset
- How to Find Wally with a Neural Network?
- Use a GAN to produce chaotic scenes including Wally
1402/11/04
- Final Exam
Computer Vision, Advanced topics I
Computer Vision, Advanced topics II
Working with Sequences
Physics Informed Neural Network
- Physics Informed Neural Network for Computer Vision and Medical Imaging
- Other papers about PINNs
- Implementation of PINNs in TensorFlow 2
- TensorFlow 2.0 implementation of Maziar Raissi’s Physics Informed Neural Networks (PINNs)
- Physics-Informed Computer Vision: A Review and Perspectives
- Physics-Informed Machine Learning for Computational Imaging, PhD Thesis
Projects & Seminars
DeepMid
- FunSearch: Making new discoveries in mathematical sciences using Large Language Models
- MuZero, AlphaZero, and AlphaDev: Optimizing computer systems
Gaussian Processes
PINNs & GANs
- PID-GAN: A GAN Framework based on a Physics-informed Discriminator for Uncertainty Quantification with Physics
- Revisiting PINNs: Generative Adversarial Physics-informed Neural Networks and Point-weighting Method
- Modify one of the following codes to work with Persian digits:
- Image to LaTeX using CNN & RNN
- im2latex tensorflow implementation
- Differentiable Convex Optimization Layers
- Chapter 12 of d2l: Optimization Algorithms
Projects & Seminars, Continue ...
Neural tangent kernel
- Wiki
- Master Thesis: An Empirical Analysis of the Laplace and Neural Tangent Kernels
- Some Math behind Neural Tangent Kernel
- Paper: Neural Tangent Kernel: Convergence and Generalization in Neural Networks
- Paper: Fast Graph Condensation with Structure-based Neural Tangent Kernel
- Paper: Every Model Learned by Gradient Descent Is Approximately a Kernel Machine
- When are Neural Networks more powerful than Neural Tangent Kernels?
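The readings above all revolve around one object. As a reminder of the definition (standard notation, not taken from a specific reading): for a network $f(x;\theta)$, the neural tangent kernel is the Gram matrix of parameter gradients,

```latex
\Theta(x, x') \;=\; \big\langle \nabla_{\theta} f(x;\theta),\; \nabla_{\theta} f(x';\theta) \big\rangle ,
```

and in the infinite-width limit this kernel stays (approximately) fixed during training, so gradient descent on the network behaves like kernel regression with $\Theta$.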
Theoretical Foundations for Deep Learning
- 1
- 2
- 3
- 4
- 5
- 6
- Engineering Optimization: Theory and Applications, by S.S. Rao, 1978
Miscellaneous
- Combinatorial optimization with physics-inspired graph neural networks
- Combinatorial Optimization with Graph Neural Networks
- Reply to: Inability of a graph neural network heuristic to outperform greedy algorithms in solving combinatorial optimization problems
- Reply to: Modern graph neural networks do worse than classical greedy algorithms in solving combinatorial optimization problems like maximum independent set
- Exact Combinatorial Optimization with Graph Convolutional Neural Networks
- Github Exact Combinatorial Optimization with Graph Convolutional Neural Networks: Set Covering, Capacitated Facility Location, Maximum Independent Set
- Ecole: a library of Extensible Combinatorial Optimization Learning Environments
- Bengio, Yoshua, Andrea Lodi, and Antoine Prouvost. “Machine learning for combinatorial optimization: a methodological tour d’horizon.” European Journal of Operational Research. 2020
- Let the Flows Tell: Solving Graph Combinatorial Optimization Problems with GFlowNets
- COMBHELPER: A Neural Approach to Reduce Search Space for Graph Combinatorial Problems
- Neural Combinatorial Optimization with Heavy Decoder: Toward Large Scale Generalization
- Discovering novel algorithms with AlphaTensor
- Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks