Question 1

In this problem, you will develop models to predict the wine type based on the Wine data set.

  1. Explore the data graphically in order to investigate the association between Type and the other features. Which of the other features seem most likely to be useful in predicting Type? Scatterplots and boxplots may be useful tools to answer this question. Describe your findings.
  2. Perform LDA, QDA and Naive Bayes on the training data in order to predict Type. What are the test errors of the models obtained?

(a) \(\mathbf{Solution.}\qquad\) Below we show box plots of each predictor against Type. The features that best differentiate Type are Alcohol, Phenols, Flavanoids, and Proline, since their distributions show the least overlap across the classes. Other potentially useful features are Color and Hue, since each appears to separate two of the types from the remaining one.
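A minimal sketch of how such plots can be produced, assuming the data have been loaded into a data frame named wine with a factor column Type (the object name is an assumption, not taken from the original code):

wine$Type = as.factor(wine$Type)
predictors = setdiff(names(wine), "Type")
par(mfrow = c(3, 5))   # adjust the grid to the number of predictors
for (p in predictors) {
  boxplot(wine[[p]] ~ wine$Type, main = p, xlab = "Type", ylab = p)
}
par(mfrow = c(1, 1))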


(b) \(\mathbf{Solution.}\qquad\) Below we compute the test errors for LDA, QDA, and Naive Bayes.
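A sketch of the fits, assuming the split is stored in data frames wine_train and wine_test (these object names are assumptions); lda() and qda() come from the MASS package and naiveBayes() from e1071:

library(MASS)    # lda(), qda()
library(e1071)   # naiveBayes()

lda_fit = lda(Type ~ ., data = wine_train)
qda_fit = qda(Type ~ ., data = wine_train)
nb_fit  = naiveBayes(Type ~ ., data = wine_train)

lda_pred = predict(lda_fit, wine_test)$class
qda_pred = predict(qda_fit, wine_test)$class
nb_pred  = predict(nb_fit, wine_test)

# test error = proportion of misclassified test observations
c(LDA = mean(lda_pred != wine_test$Type),
  QDA = mean(qda_pred != wine_test$Type),
  NB  = mean(nb_pred  != wine_test$Type))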

We find that the test errors for LDA, QDA, and Naive Bayes are 0.0182, 0.0364, and 0.0364, respectively.

Question 2

Use the \(k\) -nearest neighbor classifier on the Theft dataset. Use cross-validation to select the best \(k\) and use the test data to evaluate the performance of the selected model. Show the training, cross-validation and test errors for each choice of \(k\) and report your findings.

\(\mathbf{Solution.}\qquad\)

# QUESTION 2 ------------------------------------------------------------------
library(class)  # provides knn()
theft_train = read.csv("./Data/theft_train.csv", header = TRUE)
theft_test = read.csv("./Data/theft_test.csv", header = TRUE)

## Kfold_CV_knn function from Lab 06
Kfold_CV_knn <- function(K, K_knn, train, train_label) {
  fold_size = floor(nrow(train) / K)
  cv_error = rep(0, K)
  for(i in 1:K) {
    # iteratively select K-1 folds as training data in CV procedure, remaining 
    # as test data.
    if(i != K) {
      CV_test_id = ((i - 1) * fold_size + 1):(i * fold_size)
    }
    else {
      CV_test_id = ((i - 1) * fold_size + 1):nrow(train)
    }
    CV_train = train[-CV_test_id, ]
    CV_test = train[CV_test_id, ]
    # calculate the mean and standard deviation using CV_train
    mean_CV_train = colMeans(CV_train)
    sd_CV_train = apply(CV_train, 2, sd)
    # normalize the CV_train and CV_test using above mean and sd
    CV_train = scale(CV_train, center = mean_CV_train, scale = sd_CV_train)
    CV_test = scale(CV_test, center = mean_CV_train, scale = sd_CV_train)
    # Fit knn
    pred_CV_test = knn(CV_train, CV_test, train_label[-CV_test_id], k = K_knn)
    # Calculate CV error by taking averages
    cv_error[i] = mean(pred_CV_test != train_label[CV_test_id])
  }
  return(mean(cv_error))
}

set.seed(2020)
theft = 3   # column index of the response variable (theft), excluded from the predictors below
K_fold = 10
K_knn = seq(from = 2, to = 50, by = 2)
cv_error = rep(0,length(K_knn))
train_err = rep(0, length(K_knn))
test_err = rep(0, length(K_knn))

# data normalization for training and test data using means and stds from 
# training data.
mean_train = colMeans(theft_train[, -theft])
sd_train = apply(theft_train[, -theft], 2, sd)
K_train = scale(theft_train[, -theft], center = mean_train, scale = sd_train)
K_test = scale(theft_test[, -theft], center = mean_train, scale = sd_train)

for (i in 1:length(K_knn)) {
  cv_error[i] = Kfold_CV_knn(K_fold, K_knn[i], theft_train[, -theft], 
                             theft_train$theft)
  train_pred = knn(K_train, K_train,
                   cl = theft_train$theft, k = K_knn[i])
  
  test_pred = knn(K_train, K_test,
                  cl = theft_train$theft, k = K_knn[i])
  train_err[i] = mean(train_pred != theft_train$theft)
  test_err[i] = mean(test_pred != theft_test$theft)
}

best_k = K_knn[which(cv_error == min(cv_error))]
c(min(cv_error), best_k)
## [1]  0.3705714 32.0000000

We find that our CV approach picks \(k=32\), and the corresponding CV, training, and test errors are 0.3706, 0.3414, and 0.3840, respectively, which are quite close to one another.
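The training, cross-validation, and test errors for each choice of \(k\) can be displayed together, for example with a simple base-R plot of the vectors computed above (a sketch):

plot(K_knn, cv_error, type = "l", col = "blue",
     ylim = range(c(cv_error, train_err, test_err)),
     xlab = "k", ylab = "Error rate")
lines(K_knn, train_err, col = "red")
lines(K_knn, test_err, col = "darkgreen")
legend("bottomright", legend = c("CV", "Training", "Test"),
       col = c("blue", "red", "darkgreen"), lty = 1)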


Question 3

The textbook (“An Introduction to Statistical Learning with Applications in R”) describes that the cv.glm() function can be used in order to compute the LOOCV error estimate. Alternatively, one could compute those quantities using just the glm() and predict.glm() functions, and a for loop. You will now take this approach in order to compute the LOOCV error for a logistic regression model on the Weekly data set (in the ISLR package).

  1. Fit a logistic regression model that predicts Direction using Lag1 and Lag2. Report and comment on the result.
  2. Fit a logistic regression model that predicts Direction using Lag1 and Lag2 using all but the first observation. Report and comment on the result.
  3. Use the model from (b) to predict the direction of the first observation. You can do this by predicting that the first observation will go up if \(Pr(\mathrm{Direction="Up"}\mid \mathrm{Lag1,Lag2})>0.5\) . Was this observation correctly classified?
  4. Write a for loop from \(i=1\) to \(i=n,\) where \(n\) is the number of observations in the data set, that performs each of the following steps:
    1. Fit a logistic regression model using all but the \(i\)th observation to predict Direction using Lag1 and Lag2.
    2. Compute the posterior probability of the market moving up for the \(i\)th observation.
    3. Use the posterior probability for the \(i\)th observation in order to predict whether or not the market moves up.
    4. Determine whether or not an error was made in predicting the direction for the \(i\)th observation. If an error was made, then indicate this as a \(1\), and otherwise indicate it as a \(0\).
  5. Take the average of the \(n\) numbers obtained in (d)iv in order to obtain the LOOCV estimate for the test error. Comment on the results.

(a) \(\mathbf{Solution.}\qquad\) The logistic regression shows that Lag1 is not statistically significant, while Lag2 is significant at the 5% level.
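A sketch of the fit behind the coefficient table below, using the Weekly data set from the ISLR package:

library(ISLR)
glm_fit = glm(Direction ~ Lag1 + Lag2, data = Weekly, family = binomial)
summary(glm_fit)$coefficients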

##                Estimate Std. Error   z value     Pr(>|z|)
## (Intercept)  0.22122405 0.06146572  3.599145 0.0003192652
## Lag1        -0.03872222 0.02621658 -1.477013 0.1396722362
## Lag2         0.06024830 0.02654589  2.269590 0.0232324586


(b) \(\mathbf{Solution.}\qquad\) As in part (a), we find that Lag1 is not significant while Lag2 is significant.
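A sketch of the same fit with the first observation excluded (the object name glm_fit_b is introduced here for illustration):

glm_fit_b = glm(Direction ~ Lag1 + Lag2, data = Weekly[-1, ], family = binomial)
summary(glm_fit_b)$coefficients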

##                Estimate Std. Error   z value     Pr(>|z|)
## (Intercept)  0.22324305 0.06149894  3.630031 0.0002833875
## Lag1        -0.03843317 0.02621860 -1.465874 0.1426825151
## Lag2         0.06084763 0.02656088  2.290874 0.0219707105


(c) \(\mathbf{Solution.}\qquad\) Below we compute the prediction for the first observation.
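A sketch of the prediction, reusing the glm_fit_b object from the sketch in (b):

prob_1 = predict(glm_fit_b, newdata = Weekly[1, ], type = "response")
prob_1                               # posterior probability of "Up"
ifelse(prob_1 > 0.5, "Up", "Down")   # predicted direction
Weekly$Direction[1]                  # actual direction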

We find that the predicted probability of the market moving up is 0.571, so the first observation is classified as Up. However, this observation is incorrectly classified, since the true direction is Down.


(d) \(\mathbf{Solution.}\qquad\)
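A sketch of the LOOCV loop described in steps (d)i–(d)iv (the vector name loocv_errors is introduced here for illustration):

n = nrow(Weekly)
loocv_errors = rep(0, n)
for (i in 1:n) {
  # i.   fit the model on all but the i-th observation
  fit_i = glm(Direction ~ Lag1 + Lag2, data = Weekly[-i, ], family = binomial)
  # ii.  posterior probability that the market moves up for the i-th observation
  prob_i = predict(fit_i, newdata = Weekly[i, ], type = "response")
  # iii. predicted direction for the i-th observation
  pred_i = ifelse(prob_i > 0.5, "Up", "Down")
  # iv.  record 1 if the prediction is wrong, 0 otherwise
  loocv_errors[i] = as.numeric(pred_i != Weekly$Direction[i])
}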


(e) \(\mathbf{Solution.}\qquad\) We find that the LOOCV estimate of the test error is about 0.45, which means roughly 45% of the observations were misclassified. We could potentially improve this classification model by adding other predictors and excluding Lag1.
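The LOOCV estimate is the average of the \(n\) error indicators, e.g. using the loocv_errors vector from the sketch in (d):

mean(loocv_errors)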

## [1] 0.4499541