Python and R for Data Wrangling: Compare Pandas and Tidyverse Code Side-by-Side, and Learn Speed-Up Tips.
Medium Post
Prediction Using Caret Model Ensembling
INTRODUCTION
In this blog post, we will use the caret R package to predict the median California housing price. The original dataset can be found on the Kaggle website (https://www.kaggle.com/camnugent/california-housing-prices/kernels); it has 10 columns and 20640 rows.
The caret package is one of the most useful packages in R, offering a wide array of capabilities that range from data exploration and feature selection to the implementation of a large number of models.
SUMMARY OF WORK IN THIS POST
First, we sample 4,000 random rows from the dataset for faster processing. Then we do feature engineering: we remove the longitude and latitude columns and create new proportion features, such as people per household. We also remove rows with missing data.
Then we use caret for the following:
- Center and scale.
- Creation of the training and test set.
- One-hot encoding (dummy variables).
- Feature selection using caret’s RFE method.
- Implementation of PLS, lasso, random forest, xgbTree, and svmPoly regression models.
- Model comparison and model ensembling.
It is also worth noting that we use the doParallel library to take advantage of the multiple cores of the PC for faster processing. Finally, we evaluate the performance of the models using the test set error.
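Below is a minimal sketch of this kind of caret workflow with parallel processing. The data frame name housing, the number of cores, and the tuning settings are assumptions made for illustration, not the exact choices in the RMarkdown script.

library(caret)
library(doParallel)

cl <- makePSOCKcluster(4)        # assumed number of worker cores
registerDoParallel(cl)

set.seed(123)
idx      <- createDataPartition(housing$median_house_value, p = 0.8, list = FALSE)
train_df <- housing[idx, ]
test_df  <- housing[-idx, ]

# one-hot encoding of the factor columns (dummy variables)
dmy  <- dummyVars(median_house_value ~ ., data = train_df)
x_tr <- predict(dmy, newdata = train_df)
x_te <- predict(dmy, newdata = test_df)

# center and scale inside train(), with 5-fold cross-validation
ctrl   <- trainControl(method = "cv", number = 5)
fit_rf <- train(x = x_tr, y = train_df$median_house_value,
                method = "rf", trControl = ctrl,
                preProcess = c("center", "scale"))

# test set error
RMSE(predict(fit_rf, x_te), test_df$median_house_value)

stopCluster(cl)

The same train() call can be repeated with method = "pls", "lasso", "xgbTree", or "svmPoly"; the fitted models can then be compared with resamples() and stacked with the caretEnsemble package.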
Here is the link to the RMarkdown script:
CONCLUSION
As expected, the stacked model yielded the smallest test set error, although not by a wide margin.
3-way Variable Selection in R Regression (lasso, stepwise, and best subset)
In this post, you can find code (link below) for doing variable selection in R regression in three different ways. The variable selection was done on the well-known prostate dataset, which comes already split into training and test cases. The regressions were fitted on the training data, and the prediction mean square error was then computed on the test data.
- Stepwise regression: Here we use the R function step(), where the AIC criterion serves as a guide for adding/deleting variables. The model returned by step() is the one that achieves the lowest AIC.
- Lasso regression: This is a form of penalized regression that performs feature selection inherently. Penalized regression adds bias to the regression equation in order to reduce variance and, therefore, reduce prediction error and avoid overfitting. Because the lasso shrinks some coefficients exactly to zero, it performs implicit feature selection.
- Best subset regression: Here we use the R package leaps and specifically the function regsubsets(), which returns the best model of each size m = 1, ..., n, where n is the number of input variables. A minimal sketch of all three approaches is shown below.
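The sketch assumes the prostate data from the ElemStatLearn package (which includes the train/test indicator mentioned above); the column positions used below are assumptions for illustration.

library(ElemStatLearn)   # prostate data with a logical 'train' column
library(glmnet)          # lasso
library(leaps)           # best subset selection

train_df <- subset(prostate, train)[ , -10]    # drop the indicator column
test_df  <- subset(prostate, !train)[ , -10]

# Stepwise selection guided by AIC
full  <- lm(lpsa ~ ., data = train_df)
step1 <- step(full, trace = 0)

# Lasso: cross-validation picks the penalty; some coefficients shrink to exactly zero
x <- as.matrix(train_df[ , -9])   # predictors (lpsa is column 9)
y <- train_df$lpsa
cvfit <- cv.glmnet(x, y, alpha = 1)
coef(cvfit, s = "lambda.min")

# Best subset: the best model of each size m = 1, ..., 8
subs <- regsubsets(lpsa ~ ., data = train_df, nvmax = 8)
summary(subs)$cp    # Mallows' Cp for each size
summary(subs)$bic   # BIC for each size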
Regarding which variables are removed, it is interesting to note that:
- Lasso regression and stepwise regression result in the removal of the same variable (gleason).
- In best subset selection, when we select the model with the smallest Cp (Mallows' Cp), the best subset is the one of size 7, with one variable removed (again gleason). When we select the subset with the smallest BIC (Bayesian information criterion), the best subset is the one of size 2 (the two variables that remain are lcavol and lweight).
Regarding the test error, the smallest values are achieved by lasso regression and by best subset selection with the model of size 2.
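As a rough sketch of this comparison, the test prediction mean square error can be computed on the held-out cases, reusing the object names from the sketch above (again, illustrative rather than the exact code behind the post).

mse <- function(y, yhat) mean((y - yhat)^2)

mse(test_df$lpsa, predict(step1, newdata = test_df))          # stepwise model
mse(test_df$lpsa, predict(cvfit, newx = as.matrix(test_df[ , -9]),
                          s = "lambda.min"))                  # lasso
fit2 <- lm(lpsa ~ lcavol + lweight, data = train_df)          # best subset of size 2
mse(test_df$lpsa, predict(fit2, newdata = test_df))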
Code for regression variable selection
Leave-one-out cross validation in R and computation of the predicted residual sum of squares (PRESS) statistic
We will use the gala dataset in the faraway package to demonstrate leave-one-out cross-validation. In this type of validation, one case of the data set is left out and used as the testing set and the remaining data are used as the training set for the regression. This process is repeated until each case in the data set has served as the testing set.
The key concept in creating the iterative leave-one-out process in R is creating a column vector c1 and attaching it to gala, as shown below. This allows us to uniquely identify the row that is to be left out in each iteration.
> library(faraway)
> gala[1:3,]
Species Endemics Area Elevation Nearest Scruz Adjacent
Baltra 58 23 25.09 346 0.6 0.6 1.84
Bartolome 31 21 1.24 109 0.6 26.3 572.33
Caldwell 3 3 0.21 114 2.8 58.7 0.78
> c1 <- c(1:30)
> gala2 <- cbind(gala, c1)
> gala2[1:3,]
Species Endemics Area Elevation Nearest Scruz Adjacent c1
Baltra 58 23 25.09 346 0.6 0.6 1.84 1
Bartolome 31 21 1.24 109 0.6 26.3 572.33 2
Caldwell 3 3 0.21 114 2.8 58.7 0.78 3
> diff1 <- numeric(30)
> for(i in 1:30){
+   # fit the regression leaving out the i-th row
+   model1 <- lm(Species ~ Endemics + Area + Elevation, subset = (c1 != i), data = gala2)
+   # predict the left-out row and store the prediction error
+   specpr <- predict(model1, newdata = gala2[i, ])
+   diff1[i] <- gala2[i, 1] - specpr
+ }
> summ1 <- sum(diff1^2)   # predicted residual sum of squares
> summ1
[1] 259520.5
The variable summ1 holds the value of the PRESS statistic.
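For ordinary least squares there is also a closed-form shortcut: the leave-one-out residuals equal the ordinary residuals divided by one minus the leverages (hat values), so PRESS can be computed without refitting the model 30 times. This identity is general; the snippet below is not from the original post.

> model_full <- lm(Species ~ Endemics + Area + Elevation, data = gala)
> sum((resid(model_full) / (1 - hatvalues(model_full)))^2)   # should match summ1 from the loop above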
Correlation of the columns of a dataframe in R
1. Columns contain numerical values.
Here we can use the function cor().
Example:
> library(MASS)
> cor(trees)
Girth Height Volume
Girth 1.0000000 0.5192801 0.9671194
Height 0.5192801 1.0000000 0.5982497
Volume 0.9671194 0.5982497 1.0000000
2. Some columns contain numerical values and some contain ordinal values.
Here we can use the function hetcor() in the R package polycor. This function computes Pearson correlations between numeric variables, polyserial correlations between numeric and ordinal variables, and polychoric correlations between ordinal variables. We will try it on the quine dataset in the faraway package.
Example:
> library(faraway)
> library(polycor)
> quine[1:4,]
Eth Sex Age Lrn Days
1 A M F0 SL 2
2 A M F0 SL 11
3 A M F0 SL 14
4 A M F0 AL 5
> hetcor(quine)
Two-Step Estimates
Correlations/Type of Correlation:
Eth Sex Age Lrn Days
Eth 1 Polychoric Polychoric Polychoric Polyserial
Sex 0.008333 1 Polychoric Polychoric Polyserial
Age -0.02581 -0.08348 1 Polychoric Polyserial
Lrn 0.03389 -0.2393 -0.3187 1 Polyserial
Days -0.3504 0.1048 0.1773 0.05657 1
Standard Errors:
Eth Sex Age Lrn
Eth
Sex 0.1305
Age 0.1109 0.1103
Lrn 0.1307 0.1259 0.1026
Days 0.09246 0.1025 0.08507 0.1039
n = 146
P-values for Tests of Bivariate Normality:
Eth Sex Age Lrn
Eth
Sex <NA>
Age 0.8355 0.01814
Lrn <NA> <NA> 7.895e-11
Days 0.008092 0.009787 0.007213 0.005257