Caltech CS156 - Machine Learning - Yaser
Yaser Abu-Mostafa

folder Yaser-MachineLearning-CS156-Caltech (54 files)
file homework/final.pdf 177.64kB
file homework/final_sol.pdf 71.45kB
file homework/hw1.pdf 152.14kB
file homework/hw1_sol.pdf 72.12kB
file homework/hw2.pdf 144.28kB
file homework/hw2_sol.pdf 72.14kB
file homework/hw3.pdf 158.14kB
file homework/hw3_sol.pdf 72.18kB
file homework/hw4.pdf 174.84kB
file homework/hw4_sol.pdf 72.14kB
file homework/hw5.pdf 195.97kB
file homework/hw5_sol.pdf 72.04kB
file homework/hw6.pdf 154.47kB
file homework/hw6_sol.pdf 72.16kB
file homework/hw7.pdf 191.81kB
file homework/hw7_sol.pdf 72.14kB
file homework/hw8.pdf 154.84kB
file homework/hw8_sol.pdf 72.15kB
file lectures/Lecture 01 - The Learning Problem-mbyG85GZ0PI.mp4 204.69MB
file lectures/Lecture 02 - Is Learning Feasible-MEG35RDD7RA.mp4 194.43MB
file lectures/Lecture 03 - The Linear Model I-FIbVs5GbBlQ.mp4 181.36MB
file lectures/Lecture 04 - Error and Noise-L_0efNkdGMc.mp4 150.73MB
file lectures/Lecture 05 - Training Versus Testing-SEYAnnLazMU.mp4 193.68MB
file lectures/Lecture 06 - Theory of Generalization-6FWRijsmLtE.mp4 179.88MB
file lectures/Lecture 07 - The VC Dimension-Dc0sr0kdBVI.mp4 168.73MB
file lectures/Lecture 08 - Bias-Variance Tradeoff-zrEyxfl2-a8.mp4 180.29MB
file lectures/Lecture 09 - The Linear Model II-qSTHZvN8hzs.mp4 212.67MB
file lectures/Lecture 10 - Neural Networks-Ih5Mr93E-2c.mp4 181.29MB
file lectures/Lecture 11 - Overfitting-EQWr3GGCdzw.mp4 197.44MB
file lectures/Lecture 12 - Regularization-I-VfYXzC5ro.mp4 175.37MB
file lectures/Lecture 13 - Validation-o7zzaKd0Lkk.mp4 213.24MB
file lectures/Lecture 14 - Support Vector Machines-eHsErlPJWUU.mp4 177.12MB
file lectures/Lecture 15 - Kernel Methods-XUj5JbQihlU.mp4 192.07MB
file lectures/Lecture 16 - Radial Basis Functions-O8CfrnOPtLc.mp4 191.65MB
file lectures/Lecture 17 - Three Learning Principles-EZBUDG12Nr0.mp4 170.08MB
file lectures/Lecture 18 - Epilogue-ihLwJPHkMRY.mp4 180.87MB
file slides/slides01.pdf 306.77kB
file slides/slides02.pdf 533.86kB
file slides/slides03.pdf 1.12MB
file slides/slides04.pdf 1.02MB
file slides/slides05.pdf 517.01kB
file slides/slides06.pdf 309.56kB
file slides/slides07.pdf 426.17kB
file slides/slides08.pdf 381.68kB
file slides/slides09.pdf 878.12kB
file slides/slides10.pdf 408.37kB
file slides/slides11.pdf 769.88kB
file slides/slides12.pdf 651.92kB
file slides/slides13.pdf 764.88kB
Type: Course

title= {Caltech CS156 - Machine Learning - Yaser},
journal= {},
author= {Yaser Abu-Mostafa},
year= {2012},
url= {},
license= {CC BY-NC-ND},
abstract= {## Outline
This is an introductory course in machine learning (ML) that covers the basic theory, algorithms, and applications. ML is a key technology in Big Data, and in many financial, medical, commercial, and scientific applications. It enables computational systems to adaptively improve their performance with experience accumulated from the observed data. ML has become one of the hottest fields of study today, taken up by undergraduate and graduate students from 15 different majors at Caltech. This course balances theory and practice, and covers the mathematical as well as the heuristic aspects. The lectures below follow each other in a story-like fashion:

* What is learning?
* Can a machine learn?
* How to do it?
* How to do it well?
* Take-home lessons.

Lecture 01 - The Learning Problem - Introduction; supervised, unsupervised, and reinforcement learning. Components of the learning problem.

Lecture 02 - Is Learning Feasible? Can we generalize from a limited sample to the entire space? Relationship between in-sample and out-of-sample.

Lecture 03 - The Linear Model I - Linear classification and linear regression. Extending linear models through nonlinear transforms.
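The one-step linear regression solution via the pseudo-inverse, combined with a nonlinear transform, can be sketched as follows. A minimal illustration, assuming NumPy; the quadratic feature map and the toy data are illustrative choices, not taken from the course materials:

```python
import numpy as np

def nonlinear_transform(x):
    # map scalar x to (1, x, x^2): the model is linear in the transformed space
    return np.stack([np.ones_like(x), x, x**2], axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
y = 2 * x**2 - 1 + 0.1 * rng.standard_normal(50)  # noisy quadratic target

Z = nonlinear_transform(x)
w = np.linalg.pinv(Z) @ y  # linear regression in one step: w = pinv(Z) y
```

Because learning happens in the transformed space, the recovered weights approximate the coefficients of the quadratic target.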

Lecture 04 - Error and Noise - The principled choice of error measures. What happens when the target we want to learn is noisy.

Lecture 05 - Training versus Testing - The difference between training and testing in mathematical terms. What makes a learning model able to generalize?

Lecture 06 - Theory of Generalization - How an infinite model can learn from a finite sample. The most important theoretical result in machine learning.

Lecture 07 - The VC Dimension - A measure of what it takes for a model to learn. Relationship to the number of parameters and degrees of freedom.

Lecture 08 - Bias-Variance Tradeoff - Breaking down the learning performance into competing quantities. The learning curves.

Lecture 09 - The Linear Model II - More about linear models. Logistic regression, maximum likelihood, and gradient descent.
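Gradient descent on the logistic-regression cross-entropy error can be sketched as below. A minimal illustration, assuming NumPy; the learning rate, iteration count, and toy data are illustrative, not prescribed by the course:

```python
import numpy as np

def cross_entropy_grad(w, X, y):
    # gradient of the in-sample cross-entropy error for labels y in {-1, +1}
    return -np.mean((y[:, None] * X) / (1 + np.exp(y * (X @ w)))[:, None], axis=0)

rng = np.random.default_rng(1)
X = np.c_[np.ones(100), rng.standard_normal((100, 2))]  # prepend bias coordinate
w_true = np.array([0.5, 2.0, -1.0])
y = np.sign(X @ w_true)  # linearly separable toy labels

w = np.zeros(3)
for _ in range(2000):
    w -= 0.5 * cross_entropy_grad(w, X, y)  # fixed-step gradient descent
```

On separable data like this, the learned weights classify essentially all training points correctly.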

Lecture 10 - Neural Networks - A biologically inspired model. The efficient backpropagation learning algorithm. Hidden layers.

Lecture 11 - Overfitting - Fitting the data too well; fitting the noise. Deterministic noise versus stochastic noise.

Lecture 12 - Regularization - Putting the brakes on fitting the noise. Hard and soft constraints. Augmented error and weight decay.
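The weight-decay regularizer has a closed-form solution for linear regression, which can be sketched as below. A minimal illustration, assuming NumPy; the regularization strength and toy data are illustrative:

```python
import numpy as np

def weight_decay_solution(Z, y, lam):
    # minimizer of the augmented error E_in(w) + lam * w.T @ w / N
    # (up to the 1/N scaling convention): solve (Z'Z + lam I) w = Z'y
    d = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ y)

rng = np.random.default_rng(2)
Z = rng.standard_normal((30, 5))
y = Z @ np.array([1.0, 0.0, 0.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(30)

w_unreg = weight_decay_solution(Z, y, 0.0)
w_reg = weight_decay_solution(Z, y, 10.0)  # "the brakes": shrinks the weights
```

Increasing the regularization parameter shrinks the weight vector, trading a little fit for less sensitivity to noise.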

Lecture 13 - Validation - Taking a peek out of sample. Model selection and data contamination. Cross validation.
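Leave-one-out cross validation for model selection can be sketched as below. A minimal illustration, assuming NumPy; the candidate models (polynomial degrees) and toy data are illustrative:

```python
import numpy as np

def loo_error(x, y, degree):
    # leave-one-out cross validation: train on N-1 points, validate on the rest
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coef = np.polyfit(x[mask], y[mask], degree)  # fit without point i
        pred = np.polyval(coef, x[i])                # "peek out of sample"
        errs.append((pred - y[i]) ** 2)
    return np.mean(errs)

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 20)
y = x**2 + 0.05 * rng.standard_normal(20)  # noisy quadratic target

best = min(range(1, 6), key=lambda d: loo_error(x, y, d))  # pick degree by CV error
```

Because each validation point is held out of training, the averaged error estimates out-of-sample performance; here it penalizes the underfitting linear model relative to the quadratic one.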

Lecture 14 - Support Vector Machines - One of the most successful learning algorithms; getting a complex model at the price of a simple one.

Lecture 15 - Kernel Methods - Extending SVM to infinite-dimensional spaces using the kernel trick, and to non-separable data using soft margins.
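The kernel trick rests on the identity that a kernel evaluates an inner product in a transformed space without ever forming the transform. A minimal check of that identity for the second-order polynomial kernel in two dimensions, assuming NumPy; the test vectors are illustrative:

```python
import numpy as np

def poly2_kernel(x, xp):
    # 2nd-order polynomial kernel: K(x, x') = (1 + x . x')^2
    return (1 + x @ xp) ** 2

def poly2_features(x):
    # explicit feature map whose inner product reproduces the kernel (2-d input)
    x1, x2 = x
    return np.array([1.0, np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 * x1, x2 * x2, np.sqrt(2) * x1 * x2])

x = np.array([0.3, -0.7])
xp = np.array([1.2, 0.5])
k_direct = poly2_kernel(x, xp)                         # O(d) work
k_via_z = poly2_features(x) @ poly2_features(xp)       # same value via the feature map
```

The two quantities agree, and for kernels like the RBF kernel the corresponding feature space is infinite-dimensional, so the kernel evaluation is the only tractable route.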

Lecture 16 - Radial Basis Functions - An important learning model that connects several machine learning models and techniques.

Lecture 17 - Three Learning Principles - Major pitfalls for machine learning practitioners: Occam's razor, sampling bias, and data snooping.

Lecture 18 - Epilogue - The map of machine learning. Brief views of Bayesian learning and aggregation methods.},
keywords= {},
terms= {},
superseded= {}
