Quick Start Guide: Optimal Feature Selection for Classification

This is an R version of the corresponding OptimalFeatureSelection quick start guide.

In this example we will use Optimal Feature Selection on the Mushroom dataset, where the goal is to distinguish poisonous from edible mushrooms.

First we load in the data and split into training and test datasets:

df <- read.table(
    "agaricus-lepiota.data",
    sep = ",",
    col.names = c("target", "cap_shape", "cap_surface", "cap_color",
                  "bruises", "odor", "gill_attachment", "gill_spacing",
                  "gill_size", "gill_color", "stalk_shape", "stalk_root",
                  "stalk_surface_above", "stalk_surface_below",
                  "stalk_color_above", "stalk_color_below", "veil_type",
                  "veil_color", "ring_number", "ring_type", "spore_color",
                  "population", "habitat"),
    stringsAsFactors = TRUE
)
  target cap_shape cap_surface cap_color bruises odor gill_attachment
1      p         x           s         n       t    p               f
2      e         x           s         y       t    a               f
  gill_spacing gill_size gill_color stalk_shape stalk_root stalk_surface_above
1            c         n          k           e          e                   s
2            c         b          k           e          c                   s
  stalk_surface_below stalk_color_above stalk_color_below veil_type veil_color
1                   s                 w                 w         p          w
2                   s                 w                 w         p          w
  ring_number ring_type spore_color population habitat
1           o         p           k          s       u
2           o         p           n          n       g
 [ reached 'max' / getOption("max.print") -- omitted 8122 rows ]
X <- df[, 2:23]
y <- df[, 1]
split <- iai::split_data("classification", X, y, seed = 1)
train_X <- split$train$X
train_y <- split$train$y
test_X <- split$test$X
test_y <- split$test$y
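
To confirm how the data was divided, we can check the sizes of the two splits; this is a minimal sketch, assuming the default behavior of split_data, which reserves roughly 30% of the points for testing:

# Check the sizes of the train/test splits; split_data defaults to
# holding out roughly 30% of the points for testing
nrow(train_X)
nrow(test_X)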

Model Fitting

We will use a grid_search to fit an optimal_feature_selection_classifier:

grid <- iai::grid_search(
    iai::optimal_feature_selection_classifier(
        random_seed = 1
    ),
    sparsity = 1:10
)
iai::fit(grid, train_X, train_y, validation_criterion = "auc")
Julia Object of type GridSearch{OptimalFeatureSelectionClassifier, IAIBase.NullGridResult}.
All Grid Results:

 Row │ sparsity  train_score  valid_score  rank_valid_score
     │ Int64     Float64      Float64      Int64
─────┼──────────────────────────────────────────────────────
   1 │        1     0.530592     0.883816                10
   2 │        2     0.72389      0.969787                 9
   3 │        3     0.775804     0.979567                 8
   4 │        4     0.810232     0.983253                 7
   5 │        5     0.812457     0.987602                 6
   6 │        6     0.877244     0.998135                 4
   7 │        7     0.885988     0.998865                 3
   8 │        8     0.883667     0.990586                 5
   9 │        9     0.924885     0.999374                 2
  10 │       10     0.937081     0.999595                 1

Best Params:
  sparsity => 10

Best Model - Fitted OptimalFeatureSelectionClassifier:
  Constant: -0.0939951
  Weights:
    gill_color==b:   1.84315
    gill_size==n:    1.74969
    odor==a:        -3.57733
    odor==c:         1.95562
    odor==f:         3.18953
    odor==l:        -3.59124
    odor==n:        -3.35133
    odor==p:         1.96283
    spore_color==r:  5.90712
    stalk_root==c:   0.381557
  (Higher score indicates stronger prediction for class `p`)
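
Before plotting, we can also inspect these results programmatically. A minimal sketch, assuming the grid accessor functions from the iai package:

# Retrieve the best parameter combination found during the search
iai::get_best_params(grid)

# Retrieve the full table of grid results as a data frame
iai::get_grid_result_summary(grid)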

The model selected a sparsity of 10 as the best parameter, but we observe that the validation scores are close for many of the parameters. We can use the results of the grid search to explore the tradeoff between the complexity of the model and the quality of its predictions:

plot(grid, type = "validation")

We see that the quality of the model increases quickly as features are added, reaching an AUC of 0.98 with 3 features. After this point, additional features improve the quality more slowly, eventually reaching an AUC close to 1 with 9 features. Depending on the application, we might decide to choose a lower sparsity for the final model than the value chosen by the grid search.
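
For instance, here is a minimal sketch of fitting a standalone learner with the sparsity fixed at 3 rather than searched over:

# Fit a single learner with sparsity fixed at 3 (chosen from the
# validation curve) instead of searching over a grid
lnr <- iai::optimal_feature_selection_classifier(
    random_seed = 1,
    sparsity = 3
)
iai::fit(lnr, train_X, train_y)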

We can see the relative importance of the selected features with variable_importance:

iai::variable_importance(iai::get_learner(grid))
         Feature Importance
1         odor_n 0.22241104
2         odor_f 0.18802063
3    gill_size_n 0.10870240
4         odor_l 0.10579560
5         odor_a 0.10205411
6   gill_color_b 0.10141060
7  spore_color_r 0.07236780
8         odor_p 0.04663120
9         odor_c 0.03972353
10  stalk_root_c 0.01288309
11     bruises_t 0.00000000
12   cap_color_b 0.00000000
13   cap_color_c 0.00000000
14   cap_color_e 0.00000000
15   cap_color_g 0.00000000
16   cap_color_n 0.00000000
17   cap_color_p 0.00000000
18   cap_color_r 0.00000000
19   cap_color_u 0.00000000
20   cap_color_w 0.00000000
21   cap_color_y 0.00000000
22   cap_shape_b 0.00000000
23   cap_shape_c 0.00000000
24   cap_shape_f 0.00000000
25   cap_shape_k 0.00000000
26   cap_shape_s 0.00000000
27   cap_shape_x 0.00000000
28 cap_surface_f 0.00000000
29 cap_surface_g 0.00000000
30 cap_surface_s 0.00000000
 [ reached 'max' / getOption("max.print") -- omitted 82 rows ]
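
Since features outside the selected subset receive zero importance, we can filter the table down to the features the model actually uses; a quick sketch in base R:

# Keep only the features with nonzero importance, i.e. those selected
# by the best model
imp <- iai::variable_importance(iai::get_learner(grid))
imp[imp$Importance > 0, ]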

We can also look at the feature importance across all sparsity levels:

plot(grid, type = "importance")

We can make predictions on new data using predict:

iai::predict(grid, test_X)
 [1] "e" "e" "e" "e" "e" "e" "e" "p" "e" "e" "p" "e" "e" "e" "e" "e" "e" "e" "e"
[20] "e" "e" "p" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e"
[39] "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e" "e"
[58] "e" "e" "e"
 [ reached getOption("max.print") -- omitted 2377 entries ]
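
If predicted probabilities are needed rather than class labels, we can use predict_proba; a brief sketch:

# Predicted probability of each class for the test points
iai::predict_proba(grid, test_X)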

We can evaluate the quality of the model using score with any of the supported loss functions. For example, the misclassification score (the proportion of points classified correctly) on the training set:

iai::score(grid, train_X, train_y, criterion = "misclassification")
[1] 0.9941973

Or the AUC on the test set:

iai::score(grid, test_X, test_y, criterion = "auc")
[1] 0.9997498
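
As a sanity check, the misclassification score should match the accuracy computed directly from the predictions; a quick sketch in base R:

# Accuracy computed by hand; this should agree with
# iai::score(grid, test_X, test_y, criterion = "misclassification")
mean(iai::predict(grid, test_X) == test_y)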

We can plot the ROC curve on the test set as an interactive visualization:

roc <- iai::roc_curve(grid, test_X, test_y, positive_label = "p")
[Interactive ROC curve visualization]

We can also plot the same ROC curve as a static image:

plot(roc)
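
To save the static image to disk rather than display it, one option is to wrap the plot call in a standard R graphics device; a minimal sketch (the filename and dimensions are illustrative):

# Write the static ROC plot to a PNG file
png("roc_curve.png", width = 600, height = 400)
plot(roc)
dev.off()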