Machine Learning – Stanford – Week 4 – Neural Networks: Representation

by Fuyang

The following content is from the Coursera course Machine Learning taught by Andrew Ng.

Introduction to Neural Networks

Welcome to week 4! This week we are covering neural networks. A neural network is a model inspired by how the brain works. Neural networks were widely used in the 80s and early 90s, but their popularity diminished in the late 90s because they were computationally expensive.

Recent resurgence: neural networks are again the state-of-the-art technique for many applications. When your phone interprets and understands your voice commands, it is likely that a neural network is helping to understand your speech; when you cash a check, the machines that automatically read the digits also use neural networks.

Why do we need a new algorithm?

[Slides: motivation 1 and 2]

Feature “explosion” for logistic regression: fitting a non-linear decision boundary requires polynomial terms, and with n raw features there are already roughly n^2/2 quadratic terms. Neural networks will hopefully help solve this issue.
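To see the scale of the problem, here is a minimal Python sketch (the 50×50-pixel image example follows the lecture; the function name is mine):

def quadratic_term_count(n):
    # Distinct products x_i * x_j with i <= j: n(n + 1)/2 terms
    return n * (n + 1) // 2

for n in [100, 2500]:  # 100 hand-picked features vs. a 50x50 grayscale image
    print(n, "->", quadratic_term_count(n), "quadratic terms")
# 100 -> 5050 quadratic terms
# 2500 -> 3126250 quadratic terms (over 3 million, infeasible in practice)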

The “one learning algorithm” hypothesis
[Slides: neurons 1–3]

How it can solve complex non-linear problems

[Slides: neurons 4 and 5]

Key point

If a network has s_j units in layer j and s_{j+1} units in layer j+1, then \Theta^{(j)} will be of dimension s_{j+1} \times (s_j + 1).

This is very important to memorize later on for the vectorized implementation; see the sketch and pop quiz below.

[Slides: forward propagation 1 and 2]
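Here is a minimal NumPy sketch of vectorized forward propagation (the function names and layer sizes are illustrative, not from the lecture): each step computes a^{(j+1)} = g(\Theta^{(j)} a^{(j)}) after prepending the bias unit a_0 = 1, and the assert encodes the s_{j+1} \times (s_j + 1) dimension rule.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagate(x, thetas):
    # thetas[j] must have shape (s_{j+1}, s_j + 1)
    a = x
    for theta in thetas:
        a = np.insert(a, 0, 1.0)         # add the bias unit a_0 = 1
        assert theta.shape[1] == a.size  # the s_{j+1} x (s_j + 1) rule
        a = sigmoid(theta @ a)           # a^(j+1) = g(Theta^(j) a^(j))
    return a

# Example: 3 input units, a hidden layer of 5 units, 4 output units.
rng = np.random.default_rng(0)
thetas = [rng.standard_normal((5, 4)),   # Theta^(1): 5 x (3 + 1)
          rng.standard_normal((4, 6))]   # Theta^(2): 4 x (5 + 1)
print(forward_propagate(np.array([1.0, 2.0, 3.0]), thetas))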


 

Non-linear classification example

[Slides: non-linear classification example (AND and OR networks)]
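The lecture builds intuition with single sigmoid units whose weights are picked by hand; the AND and OR weights below are the ones used in the slides, while the helper function is mine.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(theta, x1, x2):
    # A single sigmoid unit: g(theta_0 + theta_1 * x1 + theta_2 * x2)
    return sigmoid(theta[0] + theta[1] * x1 + theta[2] * x2)

AND = np.array([-30.0, 20.0, 20.0])  # fires only when x1 = x2 = 1
OR = np.array([-10.0, 20.0, 20.0])   # fires when either input is 1

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, int(neuron(AND, x1, x2) > 0.5), int(neuron(OR, x1, x2) > 0.5))
# 0 0 0 0
# 0 1 0 1
# 1 0 0 1
# 1 1 1 1

Stacking such units in two layers (for example AND, (NOT x1) AND (NOT x2), then OR) yields XNOR, a non-linear function that a single logistic regression unit cannot represent.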


 

Multiple output units: One-vs-all

[Slides: multi-class example 1 and 2]
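With multiple output units, each class gets its own output neuron and the label y is re-coded as a one-hot vector. A small sketch (the four vehicle classes are the lecture's example; the numeric output values are made up):

import numpy as np

# One output unit per class.
classes = ["pedestrian", "car", "motorcycle", "truck"]

# Instead of y = 3, the training target is the one-hot vector e_3:
y = np.eye(len(classes))[2]          # [0, 0, 1, 0] for "motorcycle"

# Given the network's output h_Theta(x), predict the largest unit:
h = np.array([0.1, 0.2, 0.9, 0.3])   # hypothetical network output
print(classes[int(np.argmax(h))])    # -> motorcycle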


 

Pop Quiz

[Slides: pop quiz questions 1–3]