Principal component analysis on matrices using Python



Machine learning algorithms can be time-consuming to run on large datasets. To overcome this, dimensionality reduction techniques are used. When the input dimension is high, principal component analysis can be used to speed up our models. It is a projection method that reduces dimensionality while retaining the characteristics of the original data.

In this article, we will cover the basics of principal component analysis (PCA) on matrices with a Python implementation. In addition, we apply the technique in combination with a classification algorithm.

Dataset

The dataset can be downloaded from the following link. It gives details of breast cancer patients and has 569 rows with 32 columns.

Let’s get started. Import all the libraries required for this project.

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
%matplotlib inline

Loading the dataset

dataset = pd.read_csv('cancerdataset.csv')
# Encode the diagnosis labels numerically: malignant = 1, benign = 0
dataset["diagnosis"] = dataset["diagnosis"].map({'M': 1, 'B': 0})
# Drop the last column
data = dataset.iloc[:, 0:-1]
data.head()
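
If the CSV file is not at hand, the same Wisconsin breast cancer data also ships with scikit-learn. A minimal alternative loader (note the feature names differ slightly from the CSV, and the bundled target uses 0 for malignant and 1 for benign, the reverse of the mapping above):

from sklearn.datasets import load_breast_cancer

# Load the bundled copy of the Wisconsin breast cancer data
cancer = load_breast_cancer()
df = pd.DataFrame(cancer.data, columns=cancer.feature_names)
df['diagnosis'] = cancer.target  # 0 = malignant, 1 = benign here
df.head()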

We store the independent and dependent variables separately using the iloc method; the id column is left out of the features.

X = data.iloc[:, 2:].values   # feature columns (id and diagnosis excluded)
y = data.iloc[:, 1].values    # diagnosis labels

Divide the data into training and test sets in an 80:20 ratio.

from sklearn.model_selection import train_test_split 
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0) 

Standardization for PCA

PCA can only be applied to numerical data, so any categorical features must first be converted to a numerical format. We also need to standardize the data so that features measured in different units are brought onto a common scale.

from sklearn.preprocessing import StandardScaler

sc = StandardScaler()
# Fit the scaler on the training set, then apply the same scaling to the test set
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
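
As a quick sanity check, every standardized training feature should now have a mean of roughly 0 and a standard deviation of roughly 1 (the test set will only be approximately standardized, since it was transformed with the training statistics):

# Means should be ~0 and standard deviations ~1 after scaling
print(X_train.mean(axis=0).round(6))
print(X_train.std(axis=0).round(6))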

Covariance matrix

From the standardized data, we build the covariance matrix. It gives the covariance between each pair of features in the original dataset. A negative value in the result below means that the two features are inversely related.

# Covariance matrix of the training set
mean_vec = np.mean(X_train, axis=0)
cov_mat = (X_train - mean_vec).T.dot(X_train - mean_vec) / (X_train.shape[0] - 1)
# Covariance matrix of the test set (using the test set's own mean)
mean_vect = np.mean(X_test, axis=0)
cov_matt = (X_test - mean_vect).T.dot(X_test - mean_vect) / (X_test.shape[0] - 1)
print(cov_mat)

Eigendecomposition of the covariance matrix

Each eigenvector has a corresponding eigenvalue, and the sum of the eigenvalues equals the total variance in the dataset. The eigenvector with the largest eigenvalue points in the direction of maximum variance, while eigenvectors with small eigenvalues account for very little of the variation and can be dropped.

cov_mat = np.cov(X_train.T)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
cov_matt = np.cov(X_test.T)
eig_vals_test, eig_vecs_test = np.linalg.eig(cov_matt)
print(eig_vals)
print(eig_vecs)
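
Sorting the eigenvalues in descending order shows how much of the total variance each component explains, which is a useful guide for choosing how many components to keep. A minimal sketch (the variable names are illustrative):

# Rank the eigenvalues from largest to smallest
sorted_vals = np.sort(eig_vals)[::-1]
# Fraction of the total variance explained by each component
var_explained = sorted_vals / sorted_vals.sum()
# Cumulative variance captured by the first k components
cum_var = np.cumsum(var_explained)
print(var_explained[:5])
print(cum_var[:5])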

We need to specify how many components we want to keep. Here, we reduce the original feature space down to just two components. The first and second principal components capture the most variance in the original dataset.

from sklearn.decomposition import PCA

pca = PCA(n_components = 2)
# Learn the projection from the training set, then apply it to the test set
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
X_train.shape
pca.components_

In the pca.components_ matrix, each row represents a principal component and each column gives the weight of one of the original features.
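
After fitting, scikit-learn also reports this directly: explained_variance_ratio_ gives the share of the total variance captured by each retained component.

# Variance captured by each of the two retained components, and their total
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())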

Fitting a decision tree classifier to the training set

Since we are solving a classification problem, we can use a decision tree classifier for model prediction.

from sklearn.tree import DecisionTreeClassifier   
# Create Decision Tree classifier object
clf = DecisionTreeClassifier()
# Train Decision Tree Classifier
clf = clf.fit(X_train,y_train)
#Predict the response for test dataset
y_pred = clf.predict(X_test)

Algorithm evaluation

For this classification task, we use a confusion matrix to evaluate how well our machine learning model performs.

# Build a confusion matrix of actual vs. predicted labels
confusion = pd.crosstab(y_test, y_pred, rownames=['Actual'], colnames=['Predicted'], margins=True)
confusion
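
Beyond the confusion matrix, a single accuracy figure can be read off with scikit-learn's metrics module:

from sklearn.metrics import accuracy_score

# Fraction of test samples classified correctly
print(accuracy_score(y_test, y_pred))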

Plotting the training set results

from matplotlib.colors import ListedColormap

X1, y1 = X_train, y_train
# Build a fine grid covering the range of the two principal components
a, b = np.meshgrid(np.arange(start = X1[:, 0].min() - 1,
                             stop = X1[:, 0].max() + 1, step = 0.01),
                   np.arange(start = X1[:, 1].min() - 1,
                             stop = X1[:, 1].max() + 1, step = 0.01))
# Shade each grid point by the class the tree predicts there
plt.contourf(a, b, clf.predict(np.array([a.ravel(),
             b.ravel()]).T).reshape(a.shape), alpha = 0.75,
             cmap = ListedColormap(('white', 'lightgrey')))
plt.xlim(a.min(), a.max())
plt.ylim(b.min(), b.max())
# Scatter the training points, coloured by their true class
for i, j in enumerate(np.unique(y1)):
    plt.scatter(X1[y1 == j, 0], X1[y1 == j, 1],
                c = ListedColormap(('red', 'blue'))(i), label = j)
plt.title('Decision Tree')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.legend()
plt.show()

Final thoughts

In this article, we explained how PCA is used for dimensionality reduction on a large dataset. We also explored concepts such as the covariance matrix and eigendecomposition, which underlie the computation of the principal components. We hope this article is helpful to you.

