Tips and Tricks
This page contains some tips and tricks for getting the best results out of Optimal Trees.
Parallelization
OptimalTrees.jl is set up to easily train trees in parallel across multiple processes or machines. For details see the IAIBase documentation on parallelization.
Whenever OptimalTrees is training trees, it will automatically parallelize the training across all worker processes in the Julia session. Increasing the number of workers leads to a roughly linear speedup in training, so training with three worker processes will give a roughly 3x speedup.
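As a minimal sketch of the standard Julia Distributed workflow (the IAIBase parallelization documentation is the authoritative reference, and the recommended setup may differ depending on your installation):

# Sketch: add worker processes before training so that IAI.fit! can parallelize.
using Distributed
addprocs(3)               # three workers in addition to the master process
@everywhere using IAI     # make the IAI module available on every worker
# Any subsequent IAI.fit! call will run across all worker processes.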
Choosing criterion for Classification Trees
As mentioned in the parameter tuning guide, it is often important to select a value for the criterion parameter. Optimal Classification Trees use :misclassification as the default training criterion, which works well in most cases where the goal is to predict the correct class. However, this criterion may not give the best solution if the goal of the model is to predict probabilities as accurately as possible.
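To see why, recall the standard form of these per-leaf criteria (written here generically; the exact objective used internally also accounts for leaf sizes and tree complexity). For a leaf $t$ with class proportions $p_{t,k}$:

\[\text{misclassification}_t = 1 - \max_k p_{t,k}, \qquad \text{gini}_t = 1 - \sum_k p_{t,k}^2, \qquad \text{entropy}_t = -\sum_k p_{t,k} \log p_{t,k}\]

The misclassification criterion depends only on the majority class in each leaf, while gini and entropy depend on the full probability estimates, so only the latter reward splits that refine the probabilities without changing the predicted label.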
To illustrate this, consider an example where the label probability distribution is proportional to the feature x1:
using StableRNGs # for consistent RNG output across all Julia versions
rng = StableRNG(1)
X = rand(rng, 1000, 1)
y = [rand(rng) < X[i, 1] for i in 1:size(X, 1)]
Now, we train with the default :misclassification criterion:
grid = IAI.GridSearch(
    IAI.OptimalTreeClassifier(random_seed=1),
    max_depth=1:5,
)
IAI.fit!(grid, X, y)
We observe that the tree only has one split at x1 < 0.5121.
For comparison, we will train again with :gini (:entropy would also work):
grid2 = IAI.GridSearch(
    IAI.OptimalTreeClassifier(
        random_seed=1,
        criterion=:gini,
    ),
    max_depth=1:5,
)
IAI.fit!(grid2, X, y)
Fitted OptimalTreeClassifier:
  1) Split: x1 < 0.5121
    2) Split: x1 < 0.2778
      3) Predict: false (91.44%), [235,22], 257 points, error 0.1566
      4) Predict: false (65.27%), [156,83], 239 points, error 0.4534
    5) Split: x1 < 0.8217
      6) Predict: true (71.92%), [89,228], 317 points, error 0.4039
      7) Predict: true (95.19%), [9,178], 187 points, error 0.09162
We see that with :gini as the training criterion we find a tree with more splits. Note that the first split is the same, and that both leaves on the lower side of this first split predict false, while those on the upper side predict true. The new splits further refine the predicted probability, which is consistent with how the data was generated.
Comparing the trees, we can understand how the different values of criterion affect the output. After the first split, the tree trained with :misclassification does not split any further, as these splits would not change the predicted label for any point and thus make no difference to the overall misclassification. The tree chooses not to include these splits as they increase the complexity for no improvement in training score. On the other hand, the tree trained with :gini does improve its training score by splitting further, as the score is calculated using the probabilities rather than the predicted label.
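One way to see this difference directly is to compare the predicted probabilities of the two trees (a small sketch; we extract the best learner from each grid with IAI.get_learner and use IAI.predict_proba from the classification API):

# Sketch: compare predicted probabilities from the two trees on a few points.
# With :misclassification the probabilities are constant on each side of the
# single split, while with :gini they are refined by the additional splits.
lnr1 = IAI.get_learner(grid)    # tree trained with :misclassification
lnr2 = IAI.get_learner(grid2)   # tree trained with :gini
IAI.predict_proba(lnr1, X)[1:5, :]
IAI.predict_proba(lnr2, X)[1:5, :]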
We can compare the AUC of each method:
IAI.score(grid, X, y, criterion=:auc), IAI.score(grid2, X, y, criterion=:auc)
(0.7970557749950976, 0.8568106963770465)
As we would expect, the tree trained with :gini has significantly higher AUC, as a result of having more refined probability estimates. This demonstrates the importance of choosing a value for criterion that is aligned with how you intend to evaluate and use the model.
Unbalanced Data
Imbalances in class labels can cause difficulties during model fitting. We will use the Climate Model Simulation Crashes dataset as an example:
using CSV, DataFrames
df = CSV.read("pop_failures.dat", DataFrame, delim=" ", ignorerepeated=true)
X = df[:, 3:20]
y = df[:, 21]
Taking a look at the target variable, we see the data is very unbalanced (91% of values are 1):
using Statistics
mean(y)
0.9148148148148149
Let's see what happens if we try to fit a model to this data:
(train_X, train_y), (test_X, test_y) = IAI.split_data(:classification, X, y,
                                                      seed=123)
grid = IAI.GridSearch(
    IAI.OptimalTreeClassifier(
        random_seed=123,
    ),
    max_depth=1:5,
)
IAI.fit!(grid, train_X, train_y)
IAI.score(grid, test_X, test_y, criterion=:auc)
0.5
We see that the training process could not find a model more predictive than simply guessing randomly, due to the class imbalance.
The IAIBase documentation outlines multiple strategies that we can use to try to improve performance on unbalanced data. First, we can try using the :autobalance option for sample_weight to automatically adjust and balance the label distribution:
IAI.fit!(grid, train_X, train_y, sample_weight=:autobalance)
IAI.score(grid, test_X, test_y, criterion=:auc)
0.6901544401544402
We can see this has improved the out-of-sample performance significantly.
Another approach we can use to improve the performance in an unbalanced scenario is to use an alternative scoring criterion when training the model. Typically we see better performance on unbalanced data when using either gini impurity or entropy as the scoring criterion:
grid = IAI.GridSearch(
    IAI.OptimalTreeClassifier(
        random_seed=123,
        criterion=:gini,
    ),
    max_depth=1:5,
)
IAI.fit!(grid, train_X, train_y)
IAI.score(grid, test_X, test_y, criterion=:auc)
0.8052606177606176
This approach has also increased the out-of-sample performance significantly.
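These two remedies are not mutually exclusive; as a further experiment, we could refit the :gini grid with autobalanced sample weights and compare the resulting AUC (a sketch reusing the calls above; the output is not shown here):

# Sketch: combine class rebalancing with the :gini training criterion.
IAI.fit!(grid, train_X, train_y, sample_weight=:autobalance)
IAI.score(grid, test_X, test_y, criterion=:auc)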
Different Regularization Schemes for Regression Trees
When running Optimal Regression Trees with linear regression predictions in the leaves (:regression_sparsity set to :all), the linear regression models are fit using regularization to limit the degree of overfitting. By default, the function that is minimized during training is
\[\min \left\{ \text{error}(T, X, y) + \texttt{cp} * \text{complexity}(T) + \sum_{t} \| \boldsymbol\beta_t \|_1 \right\}\]
where $T$ is the tree, $X$ and $y$ are the training features and labels, respectively, $t$ are the leaves in the tree, and $\beta_t$ is the vector of regression coefficients in leaf $t$. In this way, the regularization applied in each leaf is a lasso penalty, and these are summed over the leaves to get the overall penalty. We are therefore penalizing the total complexity of the regression equations in the tree.
This regularization scheme is generally sufficient for fitting the regression equations in each leaf, as it only adds those regression coefficients that significantly improve the training error. However, there are classes of problems where this regularization limits the quality of the trees that can be found.
To illustrate this, consider the following univariate piecewise linear function:
\[y = \begin{cases} 10x & x < 0 \\ 11x & x \geq 0 \end{cases}\]
Note that this is exactly a regression tree with a single split and univariate regression predictions in each leaf.
We can generate data according to this function:
using DataFrames
x = -2:0.025:2
X = DataFrame(x=x)
y = map(v -> v > 0 ? 11v : 10v, x)
We will apply Optimal Regression Trees to learn this function, with the hope that the splits in the tree will allow us to model the breakpoints.
grid = IAI.GridSearch(
    IAI.OptimalTreeRegressor(
        random_seed=1,
        max_depth=1,
        minbucket=10,
        regression_sparsity=:all,
        regression_lambda=0.01,
    ),
)
IAI.fit!(grid, X, y)
We see that the trained tree has no splits, preferring to fit a single linear regression model across the entire domain, with a coefficient roughly equal to the average of 10 and 11. However, we know that the ideal model is really a tree with a single split at $x = 0$, with each leaf containing the appropriate coefficient (10 and 11, respectively).
The root cause of the tree having no splits is the regularization scheme we have applied: the regularization penalty applied to the tree with no splits is roughly 10.5, whereas the penalty applied to the ideal tree would be 21. This means that if all else is equal, the tree with no splits would be preferred by the training process and selected before the ideal tree. We can therefore see that splitting in the tree to refine the estimates of coefficients (e.g. refining 10.5 to 10 and 11) is actually penalized under the regularization scheme used by default.
We can resolve this by using an alternative regularization scheme that penalizes the average complexity of the regression equations in the tree instead of the total complexity:
\[\min \left\{ \text{error}(T, X, y) + \texttt{cp} * \text{complexity}(T) + \sum_{t} \frac{n_t}{n} \| \boldsymbol\beta_t \|_1 \right\}\]
where $n$ is the number of training points, and $n_t$ is the number of training points in leaf $t$. This new regularization scheme penalizes the objective by the average lasso penalty in each leaf, weighted by the number of points contained in each leaf to give more weight to those leaves with more points. We can see that under this alternative regularization scheme, both the tree with no splits and the ideal tree would have very similar regression penalties, and so the ideal tree could now be selected as it has a better training error.
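To make this concrete for our example, suppose roughly half of the points fall on each side of zero. Under the default (total) penalty, the two candidate trees are penalized as

\[\text{no-split tree: } \|\boldsymbol\beta\|_1 \approx 10.5, \qquad \text{ideal tree: } 10 + 11 = 21,\]

whereas under the weighted (average) penalty they are penalized as

\[\text{no-split tree: } \approx 10.5, \qquad \text{ideal tree: } \tfrac{1}{2} \cdot 10 + \tfrac{1}{2} \cdot 11 = 10.5,\]

so the ideal tree is no longer at a regularization disadvantage and can be selected on the basis of its lower training error.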
To see this in action, we set regression_weighted_betas to true to enable this alternative regularization scheme:
grid = IAI.GridSearch(
    IAI.OptimalTreeRegressor(
        random_seed=1,
        max_depth=1,
        minbucket=10,
        regression_sparsity=:all,
        regression_lambda=0.001,
        regression_weighted_betas=true,
    ),
)
IAI.fit!(grid, X, y)
We can see that indeed this regularization scheme enabled us to find the ideal tree for this problem.
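If you want to verify this directly, the tree learner API exposes the fitted regression terms in each leaf; the sketch below assumes the two leaves of the depth-1 tree are nodes 2 and 3:

# Sketch: inspect the fitted regression equation in each leaf of the tree.
# get_regression_constant and get_regression_weights return the intercept and
# coefficients of the linear prediction at the given node index.
lnr = IAI.get_learner(grid)
IAI.get_regression_constant(lnr, 2), IAI.get_regression_weights(lnr, 2)
IAI.get_regression_constant(lnr, 3), IAI.get_regression_weights(lnr, 3)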
In general, each of the regularization schemes can be better suited to different problems, so it can be valuable to try out both approaches to see which works best for a given problem.
Categorical Variables with Many Levels
Sometimes the input data has categorical variables with many levels (10+). Below, we use an example to illustrate how using such a variable directly can be harmful, resulting in overfitting and reduced interpretability.
In this example, we generate the predictor X, which is a single categoric variable with 40 different levels, and the outcome y, which is a function of whether X is in some of the levels, plus some noise.
using CategoricalArrays
using DataFrames
using StableRNGs
function make_data(n)
    rng = StableRNG(1)
    X = DataFrame(x1=rand(rng, 1:40, n))
    y = (X.x1 .>= 20) + 0.5 * randn(rng, n)
    X.x1 = categorical(X.x1)
    X, y
end
X, y = make_data(200)
If we train an Optimal Regression Tree directly with this categoric variable:
grid = IAI.GridSearch(
    IAI.OptimalTreeRegressor(random_seed=1),
    max_depth=1,
)
IAI.fit!(grid, X, y)
The tree we get does not recover the split exactly: it incorrectly includes 40 and excludes 9. This is likely a result of the many options for splitting, which give the model too much freedom to overfit to the noisy data.
We evaluate the tree both in-sample and out-of-sample (using a newly-generated and larger dataset):
ins = IAI.score(grid, X, y)
test_X, test_y = make_data(10000)
oos = IAI.score(grid, test_X, test_y)
ins, oos
(0.4858678938820036, 0.40189085949759884)
We see that the out-of-sample performance is 0.08 lower than in-sample, a strong indication of overfitting.
One remedy to this problem is reducing the number of levels in the categoric variable. For instance, it may be possible to combine the levels into similar groups. In this case, we consider grouping the levels based on the first digit in the level, before refitting the tree:
X.x1 = CategoricalVector(floor.(get.(X.x1) ./ 10))
test_X.x1 = CategoricalVector(floor.(get.(test_X.x1) ./ 10))
IAI.fit!(grid, X, y)
Fitted OptimalTreeRegressor:
  1) Split: x1 in [2.0,3.0]
    2) Predict: 1.029, 111 points, error 0.2807
    3) Predict: 0.02152, 89 points, error 0.2498
We recovered the correct split, as the tree is no longer allowed to mix-and-match levels but instead has to send all levels in a group to the same side of the split. This means it cannot overfit by sending 9 in a different direction from levels 1–8.
Again, we can inspect the in-sample and out-of-sample performances:
ins = IAI.score(grid, X, y)
oos = IAI.score(grid, test_X, test_y)
ins, oos
(0.4842977090166958, 0.45561923419854455)
We see that the out-of-sample performance is now much higher than when using all the levels directly, and the gap between in-sample and out-of-sample performance has largely closed.
Suggestions for handling categorical variables with many levels
- It is important to inspect the data and check if there are categorical variables that have many levels, as this is often problematic. It could be an ID variable, which should be removed. There could be many levels due to typos or variations of the same levels, in which case these should be corrected.
- If there are categoric variables with many levels, they are often too granular for the model to learn anything. A model trained with these variables can overfit, and may not be very useful for prediction. Additionally, we need to think about the potential for unseen levels when making predictions.
- As a remedy, we can combine some levels into higher-level groups. For example, we can map ZIP codes to counties or states, or map states to broader regions. Where there is no obvious grouping, another simple method is to group the rare levels together as "Other" (see the sketch after this list).
- We can also try running a model without this feature. Sometimes categoric features with many levels can dominate the tree-fitting process and drive other features out, leading to overfitting and less meaningful trees. It is often the case that the performance is the same or higher with these features removed.
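As an illustration of the grouping suggestions above, here is a small sketch (not from the original guide; the data frame df and column :state are hypothetical) of collapsing rare levels into an "Other" level before training:

# Sketch: collapse rare levels of a hypothetical categoric column into "Other".
using CategoricalArrays, DataFrames, StatsBase

function group_rare_levels(v::AbstractVector, min_count::Int)
    counts = countmap(v)    # frequency of each level
    categorical([counts[x] >= min_count ? string(x) : "Other" for x in v])
end

df = DataFrame(state=rand(["CA", "NY", "TX", "WY", "VT"], 100))
df.state = group_rare_levels(df.state, 10)    # levels seen fewer than 10 times become "Other"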