If you've ever dipped your toe into the cold & murky pool of data processing, you've probably heard of principal component analysis (PCA). PCA is a classy way to reduce the dimensionality of your data, while (purportedly) keeping most of the information. It's ubiquitously well-regarded.

But is it *actually* a good idea in practice? Should you apply PCA to your data before, for example, learning a classifier? This post will take a small step in the direction of answering this question.
I was inspired to investigate PCA by David MacKay's amusing response to an Amazon review lamenting PCA's absence in MacKay's book:

> "Principal Component Analysis" is a dimensionally invalid method that gives people a delusion that they are doing something useful with their data. If you change the units that one of the variables is measured in, it will change all the "principal components"! It's for that reason that I made no mention of PCA in my book. I am not a slavish conformist, regurgitating whatever other people think should be taught. I think before I teach. (David J C MacKay)

Ha! He's right, of course. Snarky, but right. The results of PCA depend on the scaling of your data. If, for example, your raw data has one dimension that is on the order of $10^2$ and another on the order of $10^6$, you may run into trouble.
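To see MacKay's point concretely, here's a tiny numpy sketch (toy data, not from the experiments below) showing that a change of units flips the leading principal component:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
X[:, 1] *= 3  # the second dimension has the larger variance

def first_pc(X):
    """Leading principal component (unit vector) of mean-centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]

pc_raw = first_pc(X)
# Change the units of dimension 0 (e.g. meters -> millimeters):
pc_rescaled = first_pc(X * np.array([1000.0, 1.0]))

# The leading component flips from (roughly) the second axis to the first.
print(np.abs(pc_raw), np.abs(pc_rescaled))
```

Same data, different units, different "principal components".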

Exploring this point, I'm going to report test classification accuracy before & after applying PCA as a dimensionality reduction technique. Since, as MacKay points out, variable normalizations are important, I tried each of the following combinations of normalization & PCA before classification:

1. None.
2. PCA on the raw data.
3. PCA on sphered data (each dimension has mean 0, variance 1).
4. PCA on 0-to-1 normalized data (each dimension is squished to be between 0 and 1).
5. ZCA whitening on the raw data (a rotation & scaling that results in identity covariance).
6. PCA on ZCA-whitened data.
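For concreteness, here is how the non-ZCA combinations might look as scikit-learn pipelines. The original experiments used MATLAB, so this is an illustrative translation on toy data; `PCA(n_components=0.99)` keeps the components that explain 99% of the variance, matching the setup below.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

def rf():
    # 100 trees, sqrt(d) features sampled at each split.
    return RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)

pipelines = {
    "raw":          make_pipeline(rf()),
    "pca":          make_pipeline(PCA(n_components=0.99), rf()),
    "sphere+pca":   make_pipeline(StandardScaler(), PCA(n_components=0.99), rf()),
    "0-to-1+pca":   make_pipeline(MinMaxScaler(), PCA(n_components=0.99), rf()),
}

# Toy data stands in for the real datasets.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scores = {name: p.fit(X_tr, y_tr).score(X_te, y_te) for name, p in pipelines.items()}
```

Each pipeline fits its normalization and PCA on the training split only, which is exactly the discipline the experiments below follow.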

A few experimental notes:

- Each RF was trained to full depth with $100$ trees and $\sqrt{d}$ features sampled at each split. I use MATLAB & this RF package.
- The random forest is not sensitive to any dimension-wise normalizations, and that's why I don't bother comparing RF on the raw data to RF on standard normalized & 0-1 normalized data. The performance is identical! (That's one of many reasons why we <3 random forests).
- PCA in the above experiments is always applied as a dimensionality reduction technique - the principal components that explain 99% of the variance are kept, and the rest are thrown out (see details here).
- ZCA is usually used as *normalization* (and not as dimensionality reduction). Rotation *does* affect the RF, and that's why experiment (5) is included.
- PCA and ZCA require the data to have zero mean.
- The demeaning, PCA/ZCA transformations, and classifier training were all done on the training data only, and then applied to the held-out test data.
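The last two points can be sketched in numpy (toy data; `fit_zca` and `apply_zca` are hypothetical helper names, not from the post's MATLAB code): the demeaning and whitening transform are estimated on the training data only, then applied to anything held out.

```python
import numpy as np

def fit_zca(X_train, eps=1e-8):
    """Estimate the demeaning + ZCA whitening transform from training data only."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train - mu, rowvar=False)
    s, U = np.linalg.eigh(cov)                       # eigendecomposition of covariance
    W = U @ np.diag(1.0 / np.sqrt(s + eps)) @ U.T    # symmetric (ZCA) whitening matrix
    return mu, W

def apply_zca(X, mu, W):
    return (X - mu) @ W

# Correlated toy training data.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))
X_train[:, 1] += X_train[:, 0]        # induce correlation between dims 0 and 1

mu, W = fit_zca(X_train)
X_white = apply_zca(X_train, mu, W)
# X_white now has (approximately) identity covariance.
```

For PCA-as-whitening you would instead keep `U.T` as the final rotation; the extra `U @ ... @ U.T` sandwich is what makes ZCA a "rotation back" that keeps dimensions aligned with the original axes.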

Dataset | Raw Accuracy | PCA | Sphere + PCA | 0-to-1 + PCA | ZCA | ZCA + PCA
---|---|---|---|---|---|---
proteomics | 86.3% | 65.4% | 83.2% | 82.8% | 84.3% | 82.8%
dnasim | 83.1% | 75.6% | 73.8% | 81.5% | 85.8% | 86.3%
isolet | 94.2% | 88.6% | 87.9% | 88.6% | 74.2% | 87.9%
usps | 93.7% | 91.2% | 90.4% | 90.5% | 88.2% | 88.4%
covertype | 86.8% | 94.5% | 93.5% | 94.4% | 94.6% | 94.5%

Observations:

- Applying PCA to the raw data can be disastrous. The proteomics dataset has all kinds of wacky scaling issues, and it shows. Nearly 20% loss in accuracy!
- For dnasim, choice of normalization before PCA is significant, but not so much for the other datasets. This demonstrates MacKay's point. In other words: don't just sphere like a "slavish conformist"! Try other normalizations.
- Sometimes rotating your data can create problems. ZCA keeps all the dimensions & the accuracy still drops for proteomics, isolet, and USPS. Probably because a bunch of the very noisy dimensions are mixed in with all the others, effectively adding noise where there was little before.
- Try ZCA and PCA - you might get a fantastic boost in accuracy. The covertype accuracy in this post is better than *every* covertype accuracy Alex reported in his previous post.
- I also ran these experiments with a 3-tree random forest, and the above trends are still clear. In other words, you can efficiently figure out which combo of normalization and PCA/ZCA is right for your dataset.

There is no simple story here. What these experiments have taught me is (i) don't apply PCA or ZCA blindly, but (ii) *do* try PCA and ZCA: they have the potential to improve performance significantly. Validate your algorithmic choices!

Addendum: a table with dimensionality before & after PCA with various normalizations:

Dataset | Original Dimension | PCA | Sphere + PCA | 0-to-1 + PCA | ZCA + PCA
---|---|---|---|---|---
proteomics | 109 | 3 | 57 | 24 | 65
dnasim | 403 | 2 | 1 | 3 | 13
isolet | 617 | 381 | 400 | 384 | 606
usps | 256 | 168 | 190 | 168 | 253
covertype | 54 | 36 | 48 | 36 | 49

would you mind posting your code?

You reduce the dimension using PCA by keeping only as many eigenvectors as needed to explain 99% of the variance -- what's the dimensionality, then, of the transformed data? How much lower is it than dimensionality of the greedy forward feature selection in your last post?

maverick: it's kind of a mess! i can post individual functions or datasets, if there's something in particular you're interested in.

brooks: good question. i'll run some experiments later & make an addendum to the post.

It is well known that PCA can remove the features that are essential for classification. PCA dimensionality reduction maintains what is common in the data, not what differentiates it.

Piotr: On the other hand, PCA might combine several noisy redundant features into a single axis, which could potentially be beneficial. I don't think it's possible to say what effect PCA will have without reference to particular data.

Sergey: It *seems* (very dangerous word) that linear classifiers should be affected by rotations in the feature space differently than axis-aligned thresholders (aka decision trees). Any chance you'll try the same experiments with a linear SVM?

an unregularized linear classifier will not be affected by rotations and scaling. however! when using regularization, the same regularization parameter may yield better or worse results. if you cross-validate thoroughly, i think you should be able to get nearly identical performance.

just for fun though, i tried a multi-class regularized ridge regression classifier before and after ZCA (rotation + scaling only). the results are very close, even though i used the same default regularization parameter.
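a quick numpy sketch of the invariance claim, using unregularized least squares on toy data (not the ridge experiment above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # toy features
y = rng.normal(size=200)                        # toy targets
R = np.linalg.qr(rng.normal(size=(5, 5)))[0]    # a random rotation matrix

# Unregularized least squares before and after rotating the feature space.
w_raw, *_ = np.linalg.lstsq(X, y, rcond=None)
w_rot, *_ = np.linalg.lstsq(X @ R, y, rcond=None)

# The rotation is absorbed into the weights, so predictions are identical.
same = np.allclose(X @ w_raw, (X @ R) @ w_rot)
```

with a regularization penalty like $\lambda \|w\|^2$, the rotation still cancels but a per-dimension *scaling* does not, which is why the regularization parameter may need re-tuning after normalization.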

What was the dimensionality of those various datasets? I wonder if PCA is more helpful if your number of dimensions is large with respect to the size of your dataset??

hi amy! see the Addendum table at the bottom of the post =)

I hate how the answer with these things always seems to be "do it all the ways then validate"...

work, work, work :-)

If you have a good predictor in your dataset and another variable that is highly correlated with it, both will be projected onto the same dimension and noise is added to the good predictor. It's like blurring the "good" variable.

I generated a toy dataset that illustrates this problem (btw my post was inspired by yours):

http://machine-master.blogspot.de/2012/08/pca-or-polluting-your-clever-analysis.html