Erjin recommended this article to me. I found it extremely useful as well, so I am posting it here in case I need to revisit it in the future.
I tried using an Auto Encoder and PCA to do dimensionality reduction.
The dataset is from House Prices: Advanced Regression Techniques.
I transformed the data from 79D to 30D, and then reconstructed it back to 79D.
Here’s the result:
As you can see, PCA actually did a better job.
My conclusion for now: an Auto Encoder is good at capturing several different patterns, while for fitting a single pattern PCA is the better choice.
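The PCA half of the experiment above can be sketched in a few lines. This is a minimal illustration, not the original code: random data stands in for the 79-feature House Prices matrix, and the reconstruction error is measured as mean squared error per entry.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in for the 79-feature House Prices matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 79))

# Project down to 30 components, then reconstruct back to 79D.
pca = PCA(n_components=30)
Z = pca.fit_transform(X)        # (200, 30)
X_hat = pca.inverse_transform(Z)  # (200, 79)

# Reconstruction error: lower means the 30 components kept more information.
mse = float(np.mean((X - X_hat) ** 2))
print(mse)
```

Comparing this `mse` against the Auto Encoder's reconstruction error on the same split is what produced the plot above.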
Here’s how some tensors from an Auto Encoder oscillate:
It seems that they are trying to capture something.
Well, this is my first time visualizing a neural network, and it feels like reaching a completely new world!
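One way to reproduce this kind of trajectory is to log a single weight after every epoch and plot the history afterwards. Below is a minimal sketch (again, not the original code, and the tied-weight linear Auto Encoder here is an assumption made for brevity):

```python
import numpy as np

# A tiny tied-weight linear Auto Encoder trained by gradient descent,
# logging one weight per epoch so its "oscillation" can be plotted.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))

W = rng.normal(scale=0.1, size=(10, 4))  # encoder weights; decoder is W.T
lr, history = 0.01, []

for epoch in range(50):
    Z = X @ W              # encode: 10D -> 4D
    X_hat = Z @ W.T        # decode: 4D -> 10D
    err = X_hat - X
    # Gradient of the squared reconstruction error w.r.t. the tied weights.
    grad = (X.T @ err @ W + err.T @ X @ W) / len(X)
    W -= lr * grad
    history.append(W[0, 0])  # record one weight's value this epoch
```

Plotting `history` (e.g. with matplotlib) shows the weight wandering and settling as training proceeds, which is essentially what the animation above captures.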
Well, this is my first try on Kaggle.
And the result is not so good:
The first time I saw the picture below, I thought it was just a joke. But after my first attempt, it turned out to be true, even though I tried my best to avoid it.
(picture from: xkcd, CC BY-NC 2.5)
This is definitely the most impressive picture I’ve seen this month!
(from NN4ML – Geoffrey Hinton)
I found two useful websites recently.
This website can find an abbreviation for the full name of your project.
A cool abbreviation makes your project much posher!
This website can provide you with semantic suggestions.
It’s quite useful for students writing articles in English.
After finishing the course “Machine Learning” by Andrew Ng, here are two more courses to take:
1. CS231n: Convolutional Neural Networks for Visual Recognition
2. Neural Networks for Machine Learning
And this is the site of Hung-yi Lee: http://speech.ee.ntu.edu.tw/~tlkagk/index.html
There are some good lectures there to read.
—————————— UPD 2017.8.13 ——————————
Slides of CS231n can be found here: http://cs231n.stanford.edu/slides/2017/
Videos of CS231n can be found on YouTube or Bilibili.
—————————— UPD 2018.3.4 ——————————
I found this post very useful: https://zhuanlan.zhihu.com/p/25005808