— title: "Lab 6" author: "Your name here" date: "Write the date here" output: pdf_document — The function regsubsets() in the library `leaps` can be used for regression subset selection. Thereafter, one can view the ranked models according to different scoring criteria by plotting the results of regsubsets(). Before using the function for the first time you will need to install the `leaps` package. One way to do this is to type `install.packages("leaps")` in the console window. It is probably best not to put this command in your R Markdown file since you only need to run the command once (not every time you knit your document). Recall that in R Markdown we can display R code without running it by placing `eval=FALSE` at the top of the R code chunk. “`{r eval=FALSE} install.packages("leaps") “` If you have installed the package correctly then the following commands should load in the `leaps` and `MASS` packages. “`{r} library(leaps) library(MASS) “` IF YOU GET AN ERROR IN THE PREVIOUS R CODE CHUNK PLEASE REREAD THE ABOVE PARAGRAPHS. ## Example 1: Best-subsets Regression The `Boston` dataset within the `MASS` package contains information collected by the U.S Census Service concerning housing in the area of Boston Massachusetts. It was obtained from the StatLib archive (http://lib.stat.cmu.edu/datasets/boston), and has been used extensively throughout the literature to benchmark algorithms. There are 14 attributes in each case of the dataset. They are: * CRIM – per capita crime rate by town * ZN – proportion of residential land zoned for lots over 25,000 sq.ft. * INDUS – proportion of non-retail business acres per town. * CHAS – Charles River dummy variable (1 if tract bounds river; 0 otherwise) * NOX – nitric oxides concentration (parts per 10 million) * RM – average number of rooms per dwelling * AGE – proportion of owner-occupied units built prior to 1940 * DIS – weighted distances to five Boston employment centres * RAD – index of accessibility to radial highways * TAX – full-value property-tax rate per $10,000 * PTRATIO – pupil-teacher ratio by town * B – 1000(Bk – 0.63)^2 where Bk is the proportion of blacks by town * LSTAT – % lower status of the population * MEDV – Median value of owner-occupied homes in $1000's “`{r} summary(Boston) dim(Boston) “` Use the median home value (`medv`) as the dependent variable and determine which of the five independent variables (`nox`, `rm`, `age`, `crim`, `tax`) should be included in the regression model using the`regsubsets` command to perform "Best-subsets Regression". “`{r} BSR=regsubsets(medv~nox+rm+age+crim+tax, data=Boston) “` Once we have used the `regsubsets` command we can view a plot that shows us which variables are significant according to the adjusted R-square. “`{r} plot(BSR, scale="adjr2") “` Here black indicates that a variable is included in the model, while white indicates that they are not. The model containing all variables minimizes the adjusted R-square criteria as can be determined by looking at the top row of the plot. We can see that the regression equation with the highest adjusted R-square includes every variable except `nox`. We can verify that the best number of variables is 4 by using the following code: “`{r} reg.summary=summary(BSR) which.max(reg.summary$adjr2) “` The resulting best fit using 4 variables can be shown using the `coef` function: “`{r} coef(BSR,4) “` The predict command is not set up to handle the `regsubsets` so we create a function that predicts for us. 
The `predict` command is not set up to handle `regsubsets` objects, so we create a function that makes predictions for us.

```{r}
predict.regsubsets = function(object, newdata, id, ...) {
  # Extract the formula from the original regsubsets() call
  form = as.formula(object$call[[2]])
  # Build the model matrix for the new data
  mat = model.matrix(form, newdata)
  # Multiply the selected columns by the coefficients of the model with `id` variables
  coefi = coef(object, id)
  mat[, names(coefi)] %*% coefi
}
```

Here we will make a prediction based on the original data set with the number of variables set to 4.

```{r}
pred=predict.regsubsets(BSR, Boston, id=4)
head(pred)
```

## Example 2: One-Variable-at-a-Time Models

As you should recall from class, going through every unique model can be computationally expensive. Therefore, we can consider other methods that add or subtract one variable at a time. The methods that consider one variable at a time are:

* Backward elimination
* Forward selection
* Stepwise regression

These three methods are useful when the number of independent variables is large and it is not feasible to fit all possible models. In this case, it is more efficient to use a search algorithm (e.g., forward selection, backward elimination, or stepwise regression) to find the best model.

The R function `step()` can be used to perform variable selection. To perform forward selection we need to begin by specifying a starting model and the range of models which we want to examine in the search.

```{r}
null=lm(medv~1, data=Boston)
null
```

```{r}
full=lm(medv~nox+rm+age+crim+tax, data=Boston)
full
```

We can perform forward selection using the command:

```{r}
step(null, scope=list(lower=null, upper=full), direction="forward")
```

This tells R to start with the null model and search through models lying in the range between the null and full model using the forward selection algorithm. According to this procedure, the best model is the one reported in the final step of the output (the model with the lowest AIC).

We can perform backward elimination on the same data set using the command:

```{r}
step(full, data=Boston, direction="backward")
```

and stepwise regression using the command:

```{r}
stepwise=step(null, scope=list(upper=full), data=Boston, direction="both")
stepwise
```

All of the algorithms yield equivalent results in this simple example. Let's make a prediction based on the stepwise model.

```{r}
pred_stepwise=predict(stepwise, Boston)
head(pred_stepwise)
```
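As a quick sanity check, we can also compute the in-sample root mean squared error (RMSE) of these predictions; RMSE is the same metric Kaggle reports for the challenges below. Keep in mind that in-sample RMSE is optimistic compared to RMSE on a held-out test set.

```{r}
# In-sample RMSE of the stepwise model's predictions on the Boston data
sqrt(mean((Boston$medv - pred_stepwise)^2))
```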
## Problem 1: Carseat Sales Revisited Again

Your challenge is to use the data from the in-class Kaggle challenge (https://www.kaggle.com/t/8f161e4d717f443a8bfecae2de5bf872). First read in the data and get rid of the observation IDs.

```{r}
data1=read.csv("Carseats_training.csv")
data1$ID=NULL
```

Now run the following command to perform best-subsets regression.

```{r}
BSR1=regsubsets(Sales~., data=data1)
```

a. Make a plot of the results according to adjusted R-square. (5 points)

```{r}
# Your code here
```

b. Find out how many variables are used in the best subset according to the adjusted R-square. (5 points)

```{r}
# Your code here
```

c. State the coefficients of the model in the aforementioned best subset. (5 points)

```{r}
# Your code here
```

d. Perform stepwise regression on this data. (5 points)

```{r}
# Your code here
```

e. Compare the variables selected by stepwise regression with the variables selected by the best-subsets approach. Which model would you trust the most? Why? (5 points)

*Your answer here.*

f. Make predictions on your test set and upload the results to Kaggle. (5 points)

```{r}
test=read.csv("Carseats_testing.csv")
test$Sales=rep(1,80)
# Your code here
```

g. Report your RMSE as measured by Kaggle. (7 points)

*Your answer here.*

h. Was your submission properly submitted to Kaggle? You will be graded based upon whether or not your name is on the leaderboard with an RMSE that is consistent with what you reported in the previous part. (7 points)

## Problem 2: Restaurant Revenue Prediction Revisited

Your challenge is to use the data from the in-class Kaggle challenge (https://www.kaggle.com/t/b97c47b1cc7742b2b0920b078b04d88e) and make a submission. Read in the data.

```{r}
data2=read.csv("RR_train.csv")
```

a. Fit a backward elimination model. (5 points)

```{r}
# Your code here
```

b. Predict the test set using this model and upload to Kaggle. (5 points)

```{r}
# Your code here
```

c. What is your resulting RMSE from the previous part as reported by Kaggle? (7 points)

*Your answer here.*

d. Did you make a submission properly to Kaggle? You will be graded based upon whether or not your name is on the leaderboard with an RMSE that is consistent with what you reported in the previous part. (7 points)

e. Fit a model using best-subsets regression. (5 points)

```{r}
# Your code here
```

f. Predict the test set using the model with the highest adjusted R-square and upload to Kaggle. (8 points)

```{r}
# test2$revenue=rep(1,37)
# Your code here
```

g. What is your resulting RMSE from the previous part as reported by Kaggle? (7 points)

*Your answer here.*

h. Did you make a submission properly to Kaggle? You will be graded based upon whether or not your name is on the leaderboard with an RMSE that is consistent with what you reported in the previous part. (7 points)

## Problem 3: Markdown

Now you will modify your document so that it is in pristine format. Knit the document as a PDF. You will need to submit this file and the PDF you created. Submit these two files to Blackboard. Partial credit will not be given for this problem. (5 points)