This is definitely the most impressive picture I’ve seen this month!

(from NN4ML – Geoffrey Hinton)

# Practical Tricks

I found two useful websites recently.

### 1. Acronymify!

This website generates abbreviations from the full name of your project.

A cool abbreviation makes your project much posher!

### 2. Linggle

This website provides phrasing and collocation suggestions.

It’s quite useful for students when writing articles in English.

# Notes on “A Neural Probabilistic Language Model”

## Ⅰ. Distributed Representation

### 1. Fight the curse of dimensionality

As sentences with similar meanings can use quite different words, it’s very hard for n-gram models to generalize. In the paper, the authors propose using Distributed Representation to fight the curse of dimensionality. Distributed representations give the model the ability to **make use of semantic information**, which by itself improves the model. Moreover, this approach lets each training sentence inform the model about an exponential number of semantically neighboring sentences, which makes the model generalize much better.

### 2. Deal with out-of-vocabulary words

It’s clear that the only thing we need to do is to **assign the out-of-vocabulary word a distributed representation**. To do that, we first treat the unknown word $j$ as a blank that needs to be filled. We use the network to estimate the probability $p_i$ of each word $i$ in the vocabulary filling that blank. Denoting by $C_i$ the distributed representation of word $i$ in the vocabulary, we assign $C_j = \sum\limits_{i \in \text{vocabulary}} p_i \cdot C_i$ as the distributed representation of word $j$. After that we can add word $j$ to the vocabulary and use this slightly larger vocabulary for further computation.

This approach is quite elegant because it resembles how the human brain might handle an unknown word.
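The weighted-average construction above can be sketched in plain Python (the function name and dict-based layout are illustrative choices, not from the paper):

```python
def oov_embedding(embeddings, probs):
    """Distributed representation for an out-of-vocabulary word:
    the probability-weighted average of the in-vocabulary vectors,
    where probs[w] is the network's estimate that word w fills
    the blank left by the unknown word."""
    dim = len(next(iter(embeddings.values())))
    c_j = [0.0] * dim
    for word, c_i in embeddings.items():
        for k in range(dim):
            c_j[k] += probs[word] * c_i[k]
    return c_j
```

For example, with `embeddings = {"cat": [1, 0], "dog": [0, 1]}` and `probs = {"cat": 0.75, "dog": 0.25}`, the unknown word gets the vector `[0.75, 0.25]`.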

### 3. Deal with polysemous words

We can simply assign multiple distributed representations to a single polysemous word.

### 4. Comparison with n-gram models

The network uses distributed representations to make use of semantic information and to turn discrete random variables into continuous variables. These two features let it generalize much better than n-gram models.

## Ⅱ. Improve computation efficiency

### 1. Represent the conditional probability with a tree structure

Each classification only traverses $O(\log n)$ nodes of the tree, so the network needs much less computation per prediction.
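A minimal sketch of the idea (the node/score representation here is hypothetical, not the paper’s exact formulation): each word sits at a leaf of a binary tree, and its probability is a product of $O(\log n)$ binary decisions along the root-to-leaf path, instead of a normalization over all $n$ words:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def word_probability(path, node_scores):
    """path: (node_id, go_left) pairs from the root to the word's leaf.
    node_scores: node_id -> the model's score for the context at that node.
    The probabilities of all leaves sum to 1 by construction."""
    p = 1.0
    for node_id, go_left in path:
        s = sigmoid(node_scores[node_id])  # P(go left | context, node)
        p *= s if go_left else 1.0 - s
    return p
```

With all scores at 0 each decision is a fair coin, so a depth-2 leaf gets probability $0.5 \times 0.5 = 0.25$.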

### 2. Parallel implementation

In this paper, data-based and parameter-based parallel implementations are mentioned.

## Ⅲ. Relating to Strong AI

### 1. Taking advantage of prior knowledge

In this paper they take advantage of semantic information.

### 2. Decompose the whole network into smaller parts

This can make computation faster and make the network easier to adapt to other tasks. In **Opening the Black Box of Deep Neural Networks via Information**, it’s said that a large amount of computation is spent compressing the input into an efficient representation. So if we can modularize the network and set up a set of general APIs, it could make a huge difference in practical implementations.

## Ⅳ. Papers involved

# Perceptrons

### Definition

A two-class classifier.

A Feedforward Neural Network without hidden layers.

More specifically: It maps its input $x$ to output $f(x)$ with parameter $\theta$:

$$
f(x) = \begin{cases}
1 & \text{if } x \cdot \theta > 0 \\
0 & \text{otherwise}
\end{cases}
$$

### Learning Algorithm

- Randomly initialize $\theta$
- For each example $i$ in the training set, update the parameter $\theta$ as follows:

$$\theta = \theta + (y_i - f(\theta \cdot x_i)) \cdot x_i$$

- Repeat step 2 until the classifier classifies most examples correctly.
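The algorithm above can be sketched in plain Python; the AND-gate dataset and the fixed epoch count are just illustrative choices, and the bias is handled by prepending a constant 1 feature:

```python
def predict(theta, x):
    # f(x): fire iff theta . x > 0
    return 1 if sum(t * xi for t, xi in zip(theta, x)) > 0 else 0

def train_perceptron(examples, epochs=10):
    """examples: list of (x, y) with y in {0, 1};
    each x should include a leading 1 as the bias feature."""
    theta = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for x, y in examples:
            error = y - predict(theta, x)  # 0 if correct, +1 or -1 if not
            theta = [t + error * xi for t, xi in zip(theta, x)]
    return theta

# Learning the AND function; [1, a, b] prepends the bias feature.
data = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
theta = train_perceptron(data)
```

On this linearly separable toy set the loop settles on a separating $\theta$ within a few epochs.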

### Properties

- Let’s change the view: let each example denote a constraint, so the expected $\theta$ lies in the intersection of those constraints. If you know Linear Programming, you will see that this is actually a half-plane intersection problem.
- It’s clear that if $\theta_1, \theta_2$ are feasible, then $a\theta_1 + b\theta_2$ is feasible when $a + b = 1$ and $a, b \ge 0$. This is easy to prove: a half-plane is a convex set, and the intersection of convex sets is convex. This property helps prove the convergence of the algorithm.
- If you know how to solve linear regression with gradient descent, you will know that we may overshoot the best solution when the learning rate is too large. The same problem exists in this algorithm: it converges, but it may not converge to a solution that fits the dataset perfectly. So consider “generously feasible” solutions, which lie within the feasible region by a margin at least as great as the length of the input vector that defines each constraint plane. The algorithm can only be proved to converge to a “generously feasible” solution, so the “solution” in the convergence proof below means a generously feasible one.

### Convergence

As you can see from the definition, Perceptron is a linear classifier. Thus, if the dataset is not linearly separable, this algorithm will not converge.

If you imagine plotting the dataset in a plane (or space), then with some linear algebra you can easily form the intuition that we are adjusting the decision boundary (separating hyperplane) according to every single example, and each iteration makes the parameter $\theta$ better. This algorithm does converge, and here is a sketch of the proof:

- Because the set of feasible solutions is convex, a modification made for one example won’t make the decision boundary worse for the others.
- For a single misclassified example $i$, each modification changes the value of $\theta \cdot x_i$ by $||x_i||^2$, and the total change needed for a correct classification is $|\theta \cdot x_i|$. Thus the maximum number of iterations before the classifier classifies every example correctly is $O(\max\limits_i{(\frac{|\theta \cdot x_i|}{||x_i||^2})})$
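For reference, the standard rigorous statement (Novikoff’s perceptron convergence theorem) uses a different label convention from the 0/1 labels above, mapping them to $\pm 1$:

```latex
\text{If } \|x_i\| \le R \text{ for all } i, \text{ and there exists a unit vector } \theta^*
\text{ with } y_i\,(\theta^* \cdot x_i) \ge \gamma > 0 \text{ for every example } i
\;(y_i \in \{-1, +1\}),
\text{ then the algorithm makes at most } \left(\tfrac{R}{\gamma}\right)^2 \text{ updates.}
```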

And here is a more rigorous proof, read it if you like: http://leijun00.github.io/2014/08/perceptron/

### Disadvantages

The most well-known disadvantage of this algorithm is that it can’t simulate the **XOR function**. But there are actually more general theorems, like the “Group Invariance Theorem”. So I decided to read **Perceptrons: An Introduction to Computational Geometry** (Minsky & Papert, 1969) first. Then I will come back to finish this part.

—————————— UPD 2017.8.15 ——————————

I thought **Perceptrons: An Introduction to Computational Geometry** was a paper. But it’s actually a book with 292 pages! So I’m giving up on reading it for now. Maybe I will read it in university?

# Summary of “Machine Learning by Andrew Ng”

### Preface

I found this course on Zhihu, where lots of people recommend it as “the best way to start with Machine Learning”. So I spent two weeks finishing the course, and afterwards I found it a great course as well! Here’s the link: https://www.coursera.org/learn/machine-learning/home/welcome. It won’t cost you much time (about 50 hours are enough), but it will lead you to a new world.

### Problems in this course

According to Wikipedia, here are 5 subfields of Machine Learning:

1. **Classification**: To divide inputs into known classes.

2. **Regression**: To estimate the relationships among variables.

3. **Clustering**: To divide inputs into classes; unlike in classification, the groups are not known beforehand.

4. **Density estimation**: To find the distribution of inputs in some space.

5. **Dimensionality reduction**: To simplify inputs by mapping them into a lower-dimensional space.

In this course, all 5 of these topics are covered.

### Algorithms in this course

- **Gradient Descent**: A powerful algorithm for classification and (linear) regression problems. It uses the derivative of the cost function to minimize the cost function.
- **Stochastic Gradient Descent**: A variant of Gradient Descent. When dealing with a large amount of data, it’s much faster than Gradient Descent, but a little harder to get to converge.
- **Mini-Batch Gradient Descent**: A variant of Gradient Descent. A single iteration costs less time than in Gradient Descent, though more than in Stochastic Gradient Descent; in exchange, it can fit the data better than Stochastic Gradient Descent. You can regard it as a compromise between the original Gradient Descent and Stochastic Gradient Descent.
- **Collaborative Filtering**: A variant of Gradient Descent, often used in recommender systems.
- **Normal Equation**: A great way to solve linear regression problems: it fits the data exactly in closed form. It requires computing the inverse of a matrix, which takes $O(n^3)$, so it can’t deal with datasets with too many features.
- **Support Vector Machine (SVM)**: A powerful tool for classification and (linear) regression problems. In this course, Andrew explains its application to classification, where it can be described as a Large Margin Classifier. Furthermore, the cost function of SVMs is convex, so training won’t be trapped in a local optimum, and with the “kernel trick” it can fit nonlinear hypotheses well.
- **Neural Network (Backpropagation)**: The most popular algorithm in Machine Learning. Neural networks try to simulate our brain, so they are believed to be the most likely path to strong AI. Backpropagation uses the derivative of the cost function to minimize it. It’s easy to learn and performs well on many problems.
- **K-Means Algorithm**: This algorithm tries to find patterns in data by itself, dividing the data into different unknown classes. It’s useful in analysis.
- **(Multivariate) Gaussian Distribution Algorithm**: An algorithm based on the Gaussian distribution that solves density estimation problems. Widely used in anomaly detection.
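As a concrete illustration of the first entry, here is a minimal sketch of batch gradient descent for one-variable linear regression, minimizing mean squared error on $y \approx ax + b$ (the learning rate and step count are arbitrary choices for toy data):

```python
def gradient_descent(xs, ys, lr=0.05, steps=2000):
    """Fit y ~ a*x + b by batch gradient descent on the MSE cost."""
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of J = (1/2n) * sum((a*x + b - y)^2)
        grad_a = sum((a * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum((a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b
```

Running it on points drawn from $y = 2x + 1$ recovers the slope and intercept; if `lr` is set too large, the updates overshoot and diverge instead.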

### Useful Tricks

- **Feature Scaling**: Scale the data to make algorithms work better. Widely used in Gradient Descent and other algorithms.
- **One-vs-All**: This trick lets you turn your two-class classifier into a multi-class classifier with very little modification.
- **Regularization**: The most useful way to solve overfitting problems.
- **Gradient Check**: An easy numerical way to determine whether your implementation of the cost function is bug-free.
- **Random Initialization**: A necessary part of Neural Networks. Randomly initializing several times is also a good way to increase the chance of finding the global optimum rather than a local one.
- **Train/Validation/Test Set**: A way to split your dataset. It’s used with almost every algorithm.
- **Learning Curve**: A good way to evaluate your algorithm, which can also help you decide how to improve it.
- **Precision / Recall / $F_1$ Score**: A good way to evaluate your algorithm, especially when your dataset is skewed.
- **Principal Component Analysis (PCA)**: A good way to compress your data by reducing the number of principal components, which can speed up your algorithm. It can also help you visualize your data.
- **Ceiling Analysis**: A way to analyze the pipeline of your Machine Learning system. It helps you decide which component is most worth optimizing.
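For example, feature scaling by mean normalization can be sketched as follows (a minimal version for a single feature column; dividing by the standard deviation is one common choice):

```python
def scale_features(xs):
    """Mean-normalize one feature column: zero mean, unit variance."""
    mean = sum(xs) / len(xs)
    std = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mean) / std for x in xs]
```

After scaling, every feature varies over a comparable range, which keeps gradient descent from zig-zagging along elongated cost-function contours.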

### Important Ideas

- Build a naive system as fast as possible. Optimize your system later.
- Do analyze your system, and let the result of the analysis, not intuition, tell you what to do next.

# Further Plan

After finishing the course “Machine Learning” by Andrew Ng, here are two more courses to take:

1. CS231n: Convolutional Neural Networks for Visual Recognition

2. Neural Networks for Machine Learning

And this is the site of Hung-yi Lee: http://speech.ee.ntu.edu.tw/~tlkagk/index.html

There are some good lectures to read there.

—————————— UPD 2017.8.13 ——————————

Slides for CS231n can be found here: http://cs231n.stanford.edu/slides/2017/

Videos for CS231n can be found on YouTube or Bilibili.