Tips and Tricks

This page contains some tips and tricks for getting the best results out of Optimal Trees.

Parallelization

OptimalTrees.jl is set up to easily train trees in parallel across multiple processes or machines. For details see the IAIBase documentation on parallelization.

Whenever OptimalTrees is training trees, it will automatically parallelize the training across all worker processes in the Julia session. Increasing the number of workers leads to a roughly linear speedup in training, so training with three workers (four processes in total) will give a roughly 3x speedup.
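As a minimal sketch, workers can be added with Julia's standard Distributed tooling before training; the exact steps for your installation (in particular, how IAI is made available on the workers) are described in the IAIBase parallelization documentation:

using Distributed
addprocs(3)            # add three worker processes
@everywhere using IAI  # load IAI on every worker (may not be needed, depending on how IAI is installed)

Any subsequent call to IAI.fit! will then distribute training across these workers.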

Choosing criterion for Classification Trees

As mentioned in the parameter tuning guide, it is often important to select a value for the criterion parameter. Optimal Classification Trees use :misclassification as the default training criterion, which works well in most cases where the goal is to predict the correct class. However, this criterion may not give the best solution if the goal of the model is to predict probabilities as accurately as possible.

To illustrate this, consider an example where the label probability distribution is proportional to the feature x1:

using StableRNGs  # for consistent RNG output across all Julia versions
rng = StableRNG(1)
X = rand(rng, 1000, 1)
y = [rand(rng) < X[i, 1] for i in 1:size(X, 1)]  # P(y = true) is equal to x1

Now, we train with the default :misclassification criterion:

grid = IAI.GridSearch(
    IAI.OptimalTreeClassifier(random_seed=1),
    max_depth=1:5,
)
IAI.fit!(grid, X, y)
[Optimal Trees visualization]

We observe that the tree only has one split at x1 < 0.5121.
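If you prefer to inspect the fitted tree programmatically rather than through the visualization, the tree structure can be queried with the IAIBase tree API (a sketch, assuming the standard query functions get_num_nodes, get_split_feature, and get_split_threshold):

lnr = IAI.get_learner(grid)
IAI.get_num_nodes(lnr)  # 3 nodes: the root split and its two leaves
IAI.get_split_feature(lnr, 1), IAI.get_split_threshold(lnr, 1)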

For comparison, we will train again with :gini (:entropy would also work):

grid2 = IAI.GridSearch(
    IAI.OptimalTreeClassifier(
        random_seed=1,
        criterion=:gini,
    ),
    max_depth=1:5,
)
IAI.fit!(grid2, X, y)
[Optimal Trees visualization]

We see that with :gini as the training criterion we find a tree with more splits. Note that the first split is the same, and that both leaves on the lower side of this first split predict false while those on the upper side predict true. The new splits further refine the predicted probabilities, which is consistent with how the data was generated.

Comparing the trees, we can understand how the different values of criterion affect the output. After the first split, the tree trained with :misclassification does not split any further, as these splits would not change the predicted label for any point, and thus make no difference to the overall misclassification. The tree chooses not to include these splits as they increase the complexity for no improvement in training score. On the other hand, the tree trained with :gini does improve its training score by splitting further, as the score is calculated using the probabilities rather than the predicted label.
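To make this concrete, here is a small toy calculation (not using the IAI API, just two helper functions defined here) showing why splitting a leaf with a 70% probability of true into two equally-sized leaves with 60% and 80% probabilities leaves the misclassification unchanged but improves the gini impurity:

misclass(p) = min(p, 1 - p)  # misclassification rate of a leaf predicting the majority label
gini(p) = 2p * (1 - p)       # gini impurity of a leaf with P(true) = p

misclass(0.7), 0.5 * (misclass(0.6) + misclass(0.8))  # (0.3, 0.3): no improvement
gini(0.7), 0.5 * (gini(0.6) + gini(0.8))              # (0.42, 0.4): improvement

All three leaves still predict true, so the misclassification score cannot reward the split, whereas the gini score can.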

We can compare the AUC of each method:

IAI.score(grid, X, y, criterion=:auc), IAI.score(grid2, X, y, criterion=:auc)
(0.7970557749950976, 0.8568106963770465)

As we would expect, the tree trained with :gini has significantly higher AUC, as a result of having more refined probability estimates. This demonstrates the importance of choosing a value for criterion that is aligned with how you intend to evaluate and use the model.

Unbalanced Data

Imbalances in class labels can cause difficulties during model fitting. We will use the Climate Model Simulation Crashes dataset as an example:

using CSV, DataFrames
df = CSV.read("pop_failures.dat", DataFrame, delim=" ", ignorerepeated=true)
X = df[:, 3:20]
y = df[:, 21]

Taking a look at the target variable, we see the data is very unbalanced (91% of values are 1):

using Statistics
mean(y)
0.9148148148148149

Let's see what happens if we try to fit a model to this data:

(train_X, train_y), (test_X, test_y) = IAI.split_data(:classification, X, y,
                                                      seed=1)
grid = IAI.GridSearch(
    IAI.OptimalTreeClassifier(
        random_seed=1,
    ),
    max_depth=1:5,
)
IAI.fit!(grid, train_X, train_y)
IAI.score(grid, test_X, test_y, criterion=:auc)
0.6894305019305019

We see that the performance of the model is not particularly strong, and it is possible that this is due to the class imbalance.

The IAIBase documentation outlines multiple strategies that we can use to try to improve performance on unbalanced data. First, we can try using an alternative scoring criterion when training the model. Typically we see better performance on unbalanced data when using either gini impurity or entropy as the scoring criterion:

grid = IAI.GridSearch(
    IAI.OptimalTreeClassifier(
        random_seed=1,
        criterion=:gini,
    ),
    max_depth=1:5,
)
IAI.fit!(grid, train_X, train_y)
IAI.score(grid, test_X, test_y, criterion=:auc)
0.8436293436293436

We can see this has improved the out-of-sample performance significantly.

Another approach we can use to improve the performance in an unbalanced scenario is to use the :autobalance option for sample_weight to automatically adjust and balance the label distribution:

grid = IAI.GridSearch(
    IAI.OptimalTreeClassifier(
        random_seed=1,
    ),
    max_depth=1:5,
)
IAI.fit!(grid, train_X, train_y, sample_weight=:autobalance)
IAI.score(grid, test_X, test_y, criterion=:auc)
0.8315637065637066

This approach has also increased the out-of-sample performance significantly.

One thing to note when using :autobalance (or any sample weight adjustment) is that it can affect the interpretation of the predicted probabilities returned by predict_proba or get_classification_proba. As a concrete example, let us look at the tree that resulted from training with autobalanced sample weights:

lnr = IAI.get_learner(grid)
[Optimal Trees visualization]

We see that the predicted probabilities of having label 1 in the leaves of the tree are 99.08%, 98.82%, and 61.33%. However, when using predict_proba or get_classification_proba, we see different numbers:

IAI.get_classification_proba(lnr, 2)
Dict{Int64, Float64} with 2 entries:
  0 => 0.0910047
  1 => 0.908995
IAI.get_classification_proba(lnr, 4)
Dict{Int64, Float64} with 2 entries:
  0 => 0.114041
  1 => 0.885959
IAI.get_classification_proba(lnr, 5)
Dict{Int64, Float64} with 2 entries:
  0 => 0.872067
  1 => 0.127933

The reason for this difference is that the probabilities shown in the tree visualization are calculated based solely on the number of points of each label in the leaves, whereas the probabilities used for predictions are calculated in the weighted space after the labels have been rebalanced to have roughly equal importance. This discrepancy will be present any time that sample weights are used during the tree training process.
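As an illustrative sketch of this effect (using hypothetical counts, not taken from the tree above), consider a leaf in which 95 of 100 points have label 1. If each class is reweighted in inverse proportion to its frequency so that the classes have roughly equal total weight, the weighted proportion of label 1 is far lower than the raw 95%:

n1, n0 = 95, 5                 # hypothetical raw label counts in a leaf
w1, w0 = 1 / 0.91, 1 / 0.09    # weights inversely proportional to the class frequencies
n1 * w1 / (n1 * w1 + n0 * w0)  # weighted probability of label 1, roughly 0.65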

If desired, refit_leaves! can be used to replace the weighted probability predictions inside the tree with the unweighted probabilities seen in the visualization:

IAI.refit_leaves!(lnr, X, y)
[Optimal Trees visualization]

Now, the predicted probabilities are the same as those shown in the tree:

IAI.get_classification_proba(lnr, 2)
Dict{Int64, Float64} with 2 entries:
  0 => 0.0130719
  1 => 0.986928

Different Regularization Schemes for Regression Trees

When running Optimal Regression Trees with linear regression predictions in the leaves (via regression_features), the linear regression models are fit using regularization to limit the degree of overfitting. By default, the function that is minimized during training is

\[\min \left\{ \text{error}(T, X, y) + \texttt{cp} * \text{complexity}(T) + \sum_{t} \| \boldsymbol\beta_t \|_1 \right\}\]

where $T$ is the tree, $X$ and $y$ are the training features and labels, respectively, $t$ are the leaves in the tree, and $\beta_t$ is the vector of regression coefficients in leaf $t$. In this way, the regularization applied in each leaf is a lasso penalty, and these are summed over the leaves to get the overall penalty. We are therefore penalizing the total complexity of the regression equations in the tree.

This regularization scheme is generally sufficient for fitting the regression equations in each leaf, as it only adds those regression coefficients that significantly improve the training error. However, there are classes of problems where this regularization limits the quality of the trees that can be found.

To illustrate this, consider the following univariate piecewise linear function:

\[y = \begin{cases} 10x & x < 0 \\ 11x & x \geq 0 \end{cases}\]

Note that this is exactly a regression tree with a single split and univariate regression predictions in each leaf.

We can generate data according to this function:

using DataFrames
x = -2:0.025:2
X = DataFrame(x=x)
y = map(v -> v > 0 ? 11v : 10v, x)

We will apply Optimal Regression Trees to learn this function, with the hope that the splits in the tree will allow us to model the breakpoints.

grid = IAI.GridSearch(
    IAI.OptimalTreeRegressor(
        random_seed=1,
        max_depth=1,
        minbucket=10,
        regression_features=All(),
        regression_lambda=0.01,
    ),
)
IAI.fit!(grid, X, y)
[Optimal Trees visualization]

We see that the trained tree has no splits, preferring to fit a single linear regression model across the entire domain, with the coefficient being roughly the average of 10 and 11. However, we know that the ideal model is really a tree with a single split at $x = 0$, with each leaf containing the appropriate coefficient (10 and 11, respectively).

The root cause of the tree having no splits is the regularization scheme we have applied: the regularization penalty applied to the tree with no splits is roughly 10.5, whereas the penalty applied to the ideal tree would be 21. This means that if all else is equal, the tree with no splits would be preferred by the training process and selected before the ideal tree. We can therefore see that splitting in the tree to refine the estimates of coefficients (e.g. refining 10.5 to 10 and 11) is actually penalized under the regularization scheme used by default.

We can resolve this by using an alternative regularization scheme that penalizes the average complexity of the regression equations in the tree instead of the total complexity:

\[\min \left\{ \text{error}(T, X, y) + \texttt{cp} * \text{complexity}(T) + \sum_{t} \frac{n_t}{n} \| \boldsymbol\beta_t \|_1 \right\}\]

where $n$ is the number of training points, and $n_t$ is the number of training points in leaf $t$. This new regularization scheme penalizes the objective by the average lasso penalty in each leaf, weighted by the number of points contained in each leaf to give more weight to those leaves with more points. We can see that under this alternative regularization scheme, both the tree with no splits and the ideal tree would have very similar regression penalties, and so the ideal tree could now be selected as it has a better training error.
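Concretely, using the numbers from this example (and treating each leaf of the ideal tree as containing roughly half of the points), the penalties compare as follows:

\[\text{default scheme: } 10.5 \text{ vs. } 10 + 11 = 21, \qquad \text{weighted scheme: } 10.5 \text{ vs. } \tfrac{1}{2}(10) + \tfrac{1}{2}(11) = 10.5\]

so under the weighted scheme the ideal tree is no longer penalized relative to the tree with no splits.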

To see this in action, we set regression_weighted_betas to true to enable this alternative regularization scheme:

grid = IAI.GridSearch(
    IAI.OptimalTreeRegressor(
        random_seed=1,
        max_depth=1,
        minbucket=10,
        regression_features=All(),
        regression_lambda=0.001,
        regression_weighted_betas=true,
    ),
)
IAI.fit!(grid, X, y)
[Optimal Trees visualization]

We can see that indeed this regularization scheme enabled us to find the ideal tree for this problem.

In general, each of the regularization schemes can be better suited to different problems, so it can be valuable to try out both approaches to see which works best for a given problem.
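For example, one way to compare the two schemes (a sketch reusing the data and setup from above, and assuming regression_weighted_betas can be searched over like any other parameter) is to include it in the grid search so that validation selects between them:

grid = IAI.GridSearch(
    IAI.OptimalTreeRegressor(
        random_seed=1,
        max_depth=1,
        minbucket=10,
        regression_features=All(),
    ),
    regression_weighted_betas=[false, true],
)
IAI.fit!(grid, X, y)
IAI.get_best_params(grid)  # inspect which scheme was selected by validation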

Categorical Features with Many Levels

Sometimes the input data has categorical features with many levels (10+). Using such a feature directly in the model can be harmful, as it may result in overfitting and reduced interpretability. We illustrate this with an example.

In this example, we generate a predictor X consisting of a single categoric feature with 40 different levels, and an outcome y that is a function of whether x1 is in the upper half of the levels, plus some noise.

using CategoricalArrays
using DataFrames
using StableRNGs

function make_data(n)
  rng = StableRNG(1)
  X = DataFrame(x1=rand(rng, 1:40, n))     # single feature with levels 1 to 40
  y = (X.x1 .>= 20) + 0.5 * randn(rng, n)  # outcome depends on whether x1 >= 20, plus noise
  X.x1 = categorical(X.x1)                 # treat x1 as categoric
  X, y
end
X, y = make_data(200)

If we train an Optimal Regression Tree directly with this categoric feature:

grid = IAI.GridSearch(
    IAI.OptimalTreeRegressor(random_seed=1),
    max_depth=1,
)
IAI.fit!(grid, X, y)
[Optimal Trees visualization]