Machine Learning

Types of Machine Learning

  • Supervised Learning
    • Regression Problem: Continuous valued output.
    • Classification Problem: Discrete valued output.
  • Unsupervised Learning
    • Clustering

Linear Regression

Terminologies

  • $x^{(i)}_j$ the value of feature $j$ in the $i$-th training example ($x^{(i)}$ is the $i$-th feature vector)
  • $y^{(i)}$ the outcome for the $i$-th training example
  • $h_\theta(x)$ the hypothesis
  • $J(\theta)$ the cost function
  • $\alpha$ the learning rate
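
The pieces above fit together as in the following minimal Octave sketch (not from the original notes; the names computeCost and gradientDescent, and the assumption that X is an m x (n+1) design matrix with a leading column of ones and y an m x 1 vector of outcomes, are illustrative):

  % Squared-error cost J(theta) for linear regression.
  function J = computeCost(X, y, theta)
    m = length(y);                           % number of training examples
    h = X * theta;                           % hypothesis h_theta(x) for every example
    J = (1 / (2 * m)) * sum((h - y) .^ 2);
  end

  % Batch gradient descent with learning rate alpha.
  function theta = gradientDescent(X, y, theta, alpha, num_iters)
    m = length(y);
    for iter = 1:num_iters
      % simultaneous update of every theta_j
      theta = theta - (alpha / m) * (X' * (X * theta - y));
    end
  end

  % example usage: theta = gradientDescent(X, y, zeros(size(X, 2), 1), 0.01, 1500);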

Advanced Optimization Algorithms

Besides gradient descent, there are more advanced algorithms from numerical optimization for minimizing the cost function. For all of the following algorithms, all we need to supply is code that computes the cost function $J(\theta)$ and its partial derivatives $\frac{\partial}{\partial \theta_j} J(\theta)$ (an Octave sketch of this interface follows the lists below).

  1. Conjugate gradient
  2. BFGS
  3. L-BFGS

Advantages

  • No need to manually pick $\alpha$ (the learning rate in gradient descent)
  • Often faster than gradient descent

Disadvantages

  • More complex
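
As a rough illustration of the interface these algorithms expect, here is a hedged Octave sketch using the built-in fminunc. The name costFunction and the data X, y are assumptions; the cost shown is the squared-error cost from linear regression, and any differentiable $J(\theta)$ with its gradient could be plugged in instead:

  % Returns the cost J(theta) and its partial derivatives, which is all
  % the optimizer needs from us.
  function [J, grad] = costFunction(theta, X, y)
    m = length(y);
    h = X * theta;                           % hypothesis
    J = (1 / (2 * m)) * sum((h - y) .^ 2);   % cost J(theta)
    grad = (1 / m) * (X' * (h - y));         % dJ/dtheta_j for every j
  end

  % Tell fminunc that we supply the gradient ourselves; no alpha to pick.
  options = optimset('GradObj', 'on', 'MaxIter', 400);
  initial_theta = zeros(size(X, 2), 1);
  [theta, cost] = fminunc(@(t) costFunction(t, X, y), initial_theta, options);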

Classification Problem

Logistic Regression

  • $h_\theta(x) = 1 / (1 + e^{-\theta^T x})$. Note $f(z) = 1 / (1 + e^{-z})$ is called the sigmoid function / logistic function.
  • Cost for a single training example: $-y \log h_\theta(x) - (1 - y) \log (1-h_\theta(x))$; the cost function averages this over all $m$ examples, $J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log (1 - h_\theta(x^{(i)})) \right]$. This comes from Maximum Likelihood Estimation in statistics.
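
A minimal Octave sketch of these two formulas (the names sigmoid and logisticCost are assumed; X, y, theta as in the linear-regression sketch above, with y containing 0/1 labels):

  % Sigmoid / logistic function, applied element-wise.
  function g = sigmoid(z)
    g = 1 ./ (1 + exp(-z));
  end

  % Logistic-regression cost averaged over the m training examples,
  % together with its gradient.
  function [J, grad] = logisticCost(theta, X, y)
    m = length(y);
    h = sigmoid(X * theta);                                   % h_theta(x)
    J = (1 / m) * sum(-y .* log(h) - (1 - y) .* log(1 - h));
    grad = (1 / m) * (X' * (h - y));
  end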

Cocktail Party Problem

  • Algorithm (one line of Octave, where x is assumed to be the matrix of mixed recordings, one signal per row and one time sample per column; W contains the unmixing directions)
    • [W, s, v] = svd((repmat(sum(x.*x, 1), size(x, 1), 1).*x)*x');