Large Scale Machine Learning - UToronto - STA 4273H Winter 2015

large_scale_machine_learning_utoronto_2015/ (18 files)
lecture1.ogv 390.00 MB
Lecture1_2015.pdf 8.72 MB
lecture2.ogv 292.45 MB
Lecture2_2015.pdf 4.40 MB
lecture3.ogv 314.65 MB
Lecture3_2015.pdf 4.94 MB
lecture4.ogv 331.26 MB
Lecture4_2015.pdf 14.39 MB
lecture5.ogv 342.56 MB
Lecture5_2015.pdf 7.78 MB
lecture6.ogv 375.90 MB
Lecture6_2015.pdf 6.71 MB
lecture7.ogv 330.15 MB
Lecture7_2015.pdf 3.75 MB
lecture8.ogv 352.06 MB
Lecture8_2015.pdf 10.97 MB
lecture9.ogv 345.42 MB
Lecture9_2015.pdf 9.64 MB
Type: Course
Tags:

Bibtex:
@article{,
title= {Large Scale Machine Learning - UToronto - STA 4273H Winter 2015},
keywords= {},
journal= {},
author= {},
year= {2015},
url= {http://www.cs.toronto.edu/~rsalakhu/STA4273_2015/lectures.html},
license= {},
abstract= {Lecture 1 -- Machine Learning:
Introduction to Machine Learning, Linear Models for Regression
Reading: Bishop, Chapter 1: sec. 1.1 - 1.5, and Chapter 3: sec. 3.1 - 3.3.
Optional: Bishop, Chapter 2: background material;
Hastie, Tibshirani, Friedman, Chapters 2 and 3.

Lecture 2 -- Bayesian Framework:
Bayesian Linear Regression, Evidence Maximization. Linear Models for Classification.
Reading: Bishop, Chapter 3: sec. 3.3 - 3.5, and Chapter 4.
Optional: Radford Neal's NIPS tutorial on Bayesian Methods for Machine Learning. Also see Max Welling's notes on Fisher Linear Discriminant Analysis.

Lecture 3 -- Classification:
Linear Models for Classification, Generative and Discriminative approaches, Laplace Approximation.
Reading: Bishop, Chapter 4. 
Optional: Hastie, Tibshirani, Friedman, Chapter 4. 

Lecture 4 -- Graphical Models: 
Bayesian Networks, Markov Random Fields
Reading: Bishop, Chapter 8. 
Optional: Hastie, Tibshirani, Friedman, Chapter 17 (Undirected Graphical Models). 
MacKay, Chapter 21 (Bayesian nets) and Chapter 43 (Boltzmann machines).
Also see the paper "Graphical Models, Exponential Families, and Variational Inference" by M. Wainwright and M. Jordan, Foundations and Trends in Machine Learning.

Lecture 5 -- Mixture Models and EM: 
Mixture of Gaussians, Generalized EM, Variational Bound.
Reading: Bishop, Chapter 9. 
Optional: Hastie, Tibshirani, Friedman, Chapter 13 (Prototype Methods). 
MacKay, Chapter 22 (Maximum Likelihood and Clustering).

Lecture 6 -- Variational Inference:
Mean-Field, Bayesian Mixture models, Variational Bound.
Reading: Bishop, Chapter 10. 
Optional: MacKay, Chapter 33 (Variational Inference).

Lecture 7 -- Sampling Methods:
Rejection Sampling, Importance Sampling, Metropolis-Hastings (M-H), and Gibbs Sampling.
Reading: Bishop, Chapter 11. 
Optional: MacKay, Chapter 29 (Monte Carlo Methods).

Lecture 8 -- Continuous Latent Variable Models:
PCA, Factor Analysis, ICA, Deep Autoencoders.
Reading: Bishop, Chapter 12. 
Optional: Hastie, Tibshirani, Friedman, Chapters 14.5, 14.7, 14.9 (PCA, ICA, nonlinear dimensionality reduction). 
MacKay, Chapter 34 (Latent Variable Models).

Lecture 9 -- Modeling Sequential Data:
HMMs, LDS, Particle Filters.
Reading: Bishop, Chapter 13. },
superseded= {},
terms= {}
}