
Factor Loadings using sklearn

Ask Time:2014-01-19T22:03:00         Author:Riyaz


I want the correlations between individual variables and principal components in Python. I am using PCA from sklearn, but I don't understand how I can obtain the loading matrix after decomposing my data. My code is below.

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

iris = load_iris()
data, y = iris.data, iris.target
pca = PCA(n_components=2)
transformed_data = pca.fit(data).transform(data)
# Note: explained_variance_ratio_ gives the fraction of variance
# explained per component, not the eigenvalues themselves.
eigenValues = pca.explained_variance_ratio_

http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html doesn't mention how this can be achieved.

Author: Riyaz. Reproduced under the CC BY-SA 4.0 license with a link to the original source and this disclaimer.
Link to original article:https://stackoverflow.com/questions/21217710/factor-loadings-using-sklearn
BigPanda :

Multiply each component by the square root of its corresponding eigenvalue:

pca.components_.T * np.sqrt(pca.explained_variance_)

This should produce your loading matrix.
2017-02-04T23:05:30
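The formula above can be sketched end to end on the iris data from the question. One added assumption here: the data is standardized (zero mean, unit variance, using ddof=1) before fitting, since loadings equal variable-to-component correlations only for standardized variables; the original question fits PCA on the raw data.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Standardize with ddof=1 so each column has unit sample variance;
# this is an assumption added for the correlation interpretation,
# not part of the original question's code.
X = load_iris().data
X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

pca = PCA(n_components=2)
scores = pca.fit_transform(X)

# Loading matrix: eigenvectors scaled by the square roots of the
# eigenvalues. One row per original variable, one column per PC.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(loadings.shape)  # (4, 2)
```

With standardized input, each entry of `loadings` matches the Pearson correlation between the corresponding variable and principal-component scores.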
Brad Solomon :

I think that @RickardSjogren is describing the eigenvectors, while @BigPanda is giving the loadings. There's a big difference: Loadings vs eigenvectors in PCA: when to use one or another?.

I created this PCA class with a loadings method.

Loadings, as given by pca.components_ * np.sqrt(pca.explained_variance_), are more analogous to coefficients in a multiple linear regression. I don't use .T here because in the PCA class linked above, the components are already transposed. numpy.linalg.svd produces u, s, and vt, where vt is the Hermitian transpose, so you first need to recover v with vt.T.

There is also one other important detail: the signs (positive/negative) of the components and loadings in sklearn.PCA may differ from those in packages such as R. More on that here:

In sklearn.decomposition.PCA, why are components_ negative?.
2017-06-23T19:28:48
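The relationship between numpy.linalg.svd and sklearn's PCA attributes mentioned in this answer can be sketched directly (a minimal check, assuming the iris data from the question; sklearn centers internally, so the raw data is centered manually before the SVD):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data
Xc = X - X.mean(axis=0)  # PCA operates on centered data

pca = PCA(n_components=2).fit(X)

# numpy.linalg.svd returns u, s, vt; the rows of vt are the
# principal axes, so v (variables x components) is vt.T.
u, s, vt = np.linalg.svd(Xc, full_matrices=False)
v = vt.T

# explained_variance_ corresponds to s**2 / (n - 1)
eigvals = s**2 / (Xc.shape[0] - 1)
```

Comparing absolute values sidesteps the sign ambiguity noted above: each singular vector is only determined up to a flip, so vt's rows may differ from pca.components_ by a factor of -1.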