Question 1

For each of parts (a) through (d), indicate whether we would generally expect the performance of a flexible statistical learning method to be better or worse than an inflexible method. Justify your answer.

  (a) The sample size \(n\) is extremely large, and the number of predictors \(p\) is small.
  (b) The number of predictors \(p\) is extremely large, and the number of observations \(n\) is small.
  (c) The relationship between the predictors and response is highly non-linear.
  (d) The variance of the error terms, i.e. \(\sigma^{2}=\mathrm{Var}(\epsilon)\), is extremely high.

(a) \(\mathbf{Solution.}\qquad\) The flexible method would perform better: the large sample size allows it to fit more parameters without overfitting, and the small number of predictors keeps the variance of the fitted model low.

   

(b) \(\mathbf{Solution.}\qquad\) The flexible method would perform worse. With \(p\) much larger than \(n\), a flexible method has far more freedom than the few observations can constrain, so it is very likely to overfit the data.

   

(c) \(\mathbf{Solution.}\qquad\) The flexible method would perform better, since it can capture the highly non-linear relationship between the predictors and the response, which an inflexible method cannot.

   

(d) \(\mathbf{Solution.}\qquad\) The flexible method would perform worse. A high error variance means the data are very noisy, and a flexible method is likely to fit this noise, whereas an inflexible method is less prone to overfitting it.


Question 2

Use the \(k\)-nearest neighbor classifier on the diabetes dataset. In particular, consider \(k=1,2,\ldots,20\). Show both the training and test errors for each choice and report your findings.

Hint: Note the prediction/input variables are of different units and scales. Therefore, standardization is necessary before applying the KNN method. Please refer to the lab notes for details.

Limit your solutions to at most 5 pages (including code and figures).

\(\mathbf{Solution.}\qquad\) First we read the training and test data so that we can build a KNN classifier.
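A minimal sketch of this step, assuming the data are stored in two CSV files (the file names `diabetes_train.csv` and `diabetes_test.csv` are placeholders for the actual course files):

```r
# Read the training and test sets (file names are assumed)
train <- read.csv("diabetes_train.csv", stringsAsFactors = TRUE)
test  <- read.csv("diabetes_test.csv",  stringsAsFactors = TRUE)

# Summarize the training data to inspect variable ranges
summary(train)
```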

##   Pregnancies        Glucose      BloodPressure    SkinThickness  
##  Min.   : 0.000   Min.   :  0.0   Min.   :  0.00   Min.   : 0.00  
##  1st Qu.: 1.000   1st Qu.:103.0   1st Qu.: 64.00   1st Qu.: 0.00  
##  Median : 3.000   Median :123.0   Median : 72.00   Median :22.50  
##  Mean   : 4.054   Mean   :124.8   Mean   : 69.67   Mean   :20.07  
##  3rd Qu.: 7.000   3rd Qu.:145.0   3rd Qu.: 80.00   3rd Qu.:32.00  
##  Max.   :17.000   Max.   :199.0   Max.   :114.00   Max.   :99.00  
##     Insulin            BMI        DiabetesPedigreeFunction      Age       
##  Min.   :  0.00   Min.   : 0.00   Min.   :0.0780           Min.   :21.00  
##  1st Qu.:  0.00   1st Qu.:27.88   1st Qu.:0.2537           1st Qu.:25.00  
##  Median :  0.00   Median :32.50   Median :0.4025           Median :31.00  
##  Mean   : 84.07   Mean   :32.55   Mean   :0.5023           Mean   :34.33  
##  3rd Qu.:130.00   3rd Qu.:36.80   3rd Qu.:0.6750           3rd Qu.:41.25  
##  Max.   :846.00   Max.   :59.40   Max.   :2.4200           Max.   :81.00  
##         Outcome   
##  Diabetes   :223  
##  No_Diabetes:205  

The summary table above reveals that some variables, such as BloodPressure, Glucose, SkinThickness, and BMI, have minimum values equal to 0. Since a value of 0 is not possible for these variables, we exclude the affected observations from both the training and test sets.
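One way to drop these rows, assuming the data frames are named `train` and `test` (names are illustrative):

```r
# Variables for which a value of 0 is physiologically impossible
zero_vars <- c("Glucose", "BloodPressure", "SkinThickness", "BMI")

# Keep only rows with no zeros in any of these columns
train <- train[rowSums(train[, zero_vars] == 0) == 0, ]
test  <- test[rowSums(test[, zero_vars] == 0) == 0, ]
```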

Next we will separate the outcome variable from the design matrix and scale the data appropriately.
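A sketch of this step, standardizing the test set with the training-set means and standard deviations (variable names are illustrative):

```r
# Separate the outcome from the predictors
train_label <- train$Outcome
test_label  <- test$Outcome
train_x <- train[, names(train) != "Outcome"]
test_x  <- test[,  names(test)  != "Outcome"]

# Standardize using the training means and standard deviations
mu  <- colMeans(train_x)
sds <- apply(train_x, 2, sd)
train_x <- scale(train_x, center = mu, scale = sds)
test_x  <- scale(test_x,  center = mu, scale = sds)
```

Scaling the test set with the training statistics avoids leaking information from the test set into the preprocessing.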

Finally we can make predictions for values of \(k=1,\ldots,20\) and store the training and test errors as vectors.
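This loop can be sketched with `knn()` from the `class` package (vector names are illustrative):

```r
library(class)  # provides knn()

set.seed(1)  # knn() breaks ties among neighbors at random
train_err <- numeric(20)
test_err  <- numeric(20)
for (k in 1:20) {
  pred_train <- knn(train_x, train_x, train_label, k = k)
  pred_test  <- knn(train_x, test_x,  train_label, k = k)
  train_err[k] <- mean(pred_train != train_label)
  test_err[k]  <- mean(pred_test  != test_label)
}
```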

Below we show the error plots as a function of \(1/k\).
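The plot can be produced along these lines, assuming the error vectors `train_err` and `test_err` from the previous step:

```r
# Plot training and test error against 1/k (flexibility increases rightward)
plot(1 / (1:20), train_err, type = "b", col = "blue",
     xlab = "1/k", ylab = "Error rate",
     ylim = range(c(train_err, test_err)))
lines(1 / (1:20), test_err, type = "b", col = "red")
legend("topleft", legend = c("Training error", "Test error"),
       col = c("blue", "red"), lty = 1)
```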

The optimal value of \(k\) corresponds to the value that minimizes prediction error on the test set. Below we extract train and test errors and display their confusion matrices.
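A sketch of the extraction, reusing the illustrative names from the earlier steps:

```r
# Refit at the k that minimizes the test error
best_k <- which.min(test_err)
pred_train <- knn(train_x, train_x, train_label, k = best_k)
pred_test  <- knn(train_x, test_x,  train_label, k = best_k)

table(pred_train, train_label)   # training confusion matrix
mean(pred_train == train_label)  # training accuracy
table(pred_test, test_label)     # test confusion matrix
mean(pred_test == test_label)    # test accuracy
```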

##              train_label
## pred_train    Diabetes No_Diabetes
##   Diabetes         150           0
##   No_Diabetes        0         133
## [1] 1
##              test_label
## pred_test     Diabetes No_Diabetes
##   Diabetes          23          12
##   No_Diabetes        7          32
## [1] 0.7432432

Based on the training and test error plots, the test error is minimized at \(\frac{1}{k}=1\), in other words \(k=1\) nearest neighbor. For this value of \(k\), the accuracy on the test set is 0.743. Note: the chosen value of \(k\) may differ slightly under a different seed for the random number generator, because when there is a tie among the nearest neighbors, the KNN classifier chooses between the tied classes at random.