Quick Start Guide: Optimal Prescriptive Trees
This is an R version of the corresponding OptimalTrees quick start guide.
In this example, we demonstrate how to use Optimal Prescriptive Trees (OPT) to examine the impact of job training on annual earnings, using the Lalonde sample from the National Supported Work Demonstration dataset.
First we load in the data:
colnames <- c("treatment", "age", "education", "black", "hispanic", "married",
"nodegree", "earnings_1975", "earnings_1978")
df_control <- read.table('nsw_control.txt', col.names = colnames)
df_treated <- read.table('nsw_treated.txt', col.names = colnames)
df <- rbind(df_control, df_treated)
treatment age education black hispanic married nodegree earnings_1975
1 0 23 10 1 0 0 1 0.000
2 0 26 12 0 0 0 0 0.000
3 0 22 9 1 0 0 1 0.000
4 0 34 9 1 0 0 1 4368.413
5 0 18 9 1 0 0 1 0.000
6 0 45 11 1 0 0 1 0.000
earnings_1978
1 0.00
2 12383.68
3 0.00
4 14051.16
5 10740.08
6 11796.47
[ reached 'max' / getOption("max.print") -- omitted 716 rows ]
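Before preparing the data for the prescriptive problem, it can be useful to check the size of each treatment group and the spread of the outcome. This is an optional sanity check using base R, not part of the package workflow:
# Number of subjects in each treatment group (0 = no training, 1 = training)
table(df$treatment)
# Distribution of the outcome we will try to maximize
summary(df$earnings_1978)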
Data for prescriptive problems
Prescriptive trees are trained on observational data, and require three distinct types of data:
- X: the features for each observation that can be used as the splits in the tree - this can be a matrix or a dataframe, as for classification or regression problems
- treatments: the treatment applied to each observation - this is a vector of the treatment labels, similar to the target in a classification problem
- outcomes: the outcome for each observation under the applied treatment - this is a vector of numeric values, similar to the target in a regression problem
Refer to the documentation on data preparation for more information on the data format.
In this case, the treatment is whether or not the subject received job training, and the outcome is their 1978 earnings (which we are trying to maximize):
X <- df[, 2:(ncol(df) - 1)]
treatments <- ifelse(df$treatment == 1, "training", "no training")
outcomes <- df$earnings_1978
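Since the three inputs must describe the same observations in the same order, a quick consistency check can catch mistakes early. This is an optional sketch, not part of the package API:
# All three pieces must have one entry per row of X
stopifnot(
  length(treatments) == nrow(X),
  length(outcomes) == nrow(X)
)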
We can now split into training and test datasets:
split <- iai::split_data("prescription_maximize", X, treatments, outcomes,
seed = 2)
train_X <- split$train$X
train_treatments <- split$train$treatments
train_outcomes <- split$train$outcomes
test_X <- split$test$X
test_treatments <- split$test$treatments
test_outcomes <- split$test$outcomes
Note that we have used the default 70%/30% split, but in many prescriptive problems it is desirable to save more data for testing to ensure high-quality reward estimation on the test set.
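For example, a 50%/50% split could be requested as follows. This sketch assumes the split proportion keyword is named train_proportion, as in the underlying interface; check the split_data documentation for the exact argument name. The rest of this guide continues with the default split above:
# Keep half of the data aside for out-of-sample reward estimation
split_50 <- iai::split_data("prescription_maximize", X, treatments, outcomes,
                            seed = 2, train_proportion = 0.5)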
Fitting Optimal Prescriptive Trees
We will use a grid_search to fit an optimal_tree_prescription_maximizer (note that if we were trying to minimize the outcomes, we would use an optimal_tree_prescription_minimizer):
grid <- iai::grid_search(
  iai::optimal_tree_prescription_maximizer(
    prescription_factor = 1,
    treatment_minbucket = 10,
    random_seed = 234
  ),
  max_depth = 1:5
)
iai::fit(grid, train_X, train_treatments, train_outcomes)
iai::get_learner(grid)
Here, we have set prescription_factor = 1 to focus the trees on maximizing the outcome, and treatment_minbucket = 10 so that the tree can only prescribe a treatment in a leaf if there are at least 10 subjects in that leaf that received this treatment. This ensures we have sufficient data on how the treatment affects the subjects in a leaf before prescribing it.
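If desired, these parameters can also be searched over in the grid alongside max_depth. The values below are purely illustrative, and this is a sketch rather than a recommended configuration:
# Search over tree depth and the prescription/prediction trade-off together
grid_pf <- iai::grid_search(
  iai::optimal_tree_prescription_maximizer(
    treatment_minbucket = 10,
    random_seed = 234
  ),
  max_depth = 1:5,
  prescription_factor = c(0.5, 1)
)
# This grid can be fitted with iai::fit in the same way as above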
In the resulting tree, the color in each leaf indicates which treatment is deemed to be stronger in this leaf, and the color intensity indicates the size of the difference. The tree contains some interesting insights about the effect of training, for example:
- Node 17 is where the training had the weakest effect, which is for older subjects with high earnings in 1975. This seems to make sense, as these people are likely the least in need of training.
- Node 10 shows that subjects with low 1975 earnings, at least 9 years of education, and an age of at least 28 benefitted greatly from the training.
- Nodes 6 through 8 show that for those with at least 9 years of education and 1975 earnings below $1103, the effectiveness of the training was highly linked to the age of the subject, with older subjects benefitting much more.
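To examine these nodes interactively, the fitted tree can be exported to an HTML visualization. This sketch assumes the write_html helper with the filename passed first; refer to the visualization documentation for the exact interface:
# Save an interactive visualization of the fitted prescription tree
iai::write_html("prescription_tree.html", iai::get_learner(grid))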
We can make predictions on new data using predict:
pred <- iai::predict(grid, test_X)
This returns the treatment prescribed for each subject as well as the outcome predicted for each subject under the prescribed treatment:
pred_treatments <- pred$treatments
[1] "training" "no training" "no training" "training" "training"
[6] "training" "no training" "training" "no training" "training"
[11] "training" "no training" "training" "training" "training"
[16] "training" "no training" "no training" "training" "no training"
[21] "training" "no training" "training" "no training" "training"
[26] "no training" "training" "training" "training" "training"
[31] "training" "no training" "no training" "no training" "no training"
[36] "training" "no training" "no training" "training" "training"
[41] "training" "no training" "no training" "no training" "no training"
[46] "no training" "training" "training" "training" "training"
[51] "training" "no training" "training" "no training" "training"
[56] "no training" "no training" "no training" "training" "no training"
[ reached getOption("max.print") -- omitted 156 entries ]
pred_outcomes <- pred$outcomes
[1] 6837.618 6779.621 4697.901 11123.963 5669.544 6837.618 7402.351
[8] 11123.963 3288.427 11123.963 6837.618 4697.901 5669.544 6837.618
[15] 5669.544 6837.618 6779.621 5314.847 6837.618 9646.778 11123.963
[22] 9646.778 6837.618 9646.778 6837.618 4697.901 6837.618 6837.618
[29] 6837.618 6837.618 6837.618 9646.778 3288.427 7402.351 6779.621
[36] 6837.618 9646.778 4697.901 6837.618 5669.544 6837.618 4697.901
[43] 6779.621 4697.901 6779.621 3288.427 11123.963 6837.618 6837.618
[50] 11123.963 6837.618 3288.427 5669.544 6779.621 6837.618 4697.901
[57] 9646.778 9646.778 5669.544 9646.778
[ reached getOption("max.print") -- omitted 156 entries ]
You can also use predict_outcomes to get the predicted outcomes for all treatments:
iai::predict_outcomes(grid, test_X)
no training training
1 3106.896 6837.618
2 6779.621 2578.706
3 4697.901 2769.288
4 4754.785 11123.963
5 3351.409 5669.544
6 3106.896 6837.618
7 7402.351 3963.328
8 4754.785 11123.963
9 3288.427 9181.285
10 4754.785 11123.963
11 3106.896 6837.618
12 4697.901 2769.288
13 3351.409 5669.544
14 3106.896 6837.618
15 3351.409 5669.544
16 3106.896 6837.618
17 6779.621 2578.706
18 5314.847 3427.748
19 3106.896 6837.618
20 9646.778 6320.100
21 4754.785 11123.963
22 9646.778 6320.100
23 3106.896 6837.618
24 9646.778 6320.100
25 3106.896 6837.618
26 4697.901 2769.288
27 3106.896 6837.618
28 3106.896 6837.618
29 3106.896 6837.618
30 3106.896 6837.618
[ reached 'max' / getOption("max.print") -- omitted 186 rows ]
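One way to use this output is to difference the two columns to get the predicted benefit of training for each subject. This is an optional sketch that assumes the column names shown above:
# Predicted outcome under each treatment for the test set
outcome_matrix <- iai::predict_outcomes(grid, test_X)
# Estimated benefit of training per subject; positive values favor prescribing training
predicted_uplift <- outcome_matrix[, "training"] - outcome_matrix[, "no training"]
summary(predicted_uplift)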
Evaluating Optimal Prescriptive Trees
In prescription problems, it is complicated to evaluate the quality of a prescription policy because our data only contains the outcome for the treatment that was received. Because we don't know the outcomes for the treatments that were not received (known as the counterfactuals), we cannot simply evaluate our prescriptions against the test set as we normally do.
A common approach to resolve this problem is reward estimation, where so-called rewards are estimated for each treatment for each observation. These rewards indicate the relative credit a model should be given for prescribing each treatment to each observation, and thus can be used to evaluate the quality of the prescription policy. For more details on how the reward estimation procedure is conducted, refer to the reward estimation documentation.
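As a rough guide to what follows: given an estimated outcome model mu_t(x) for each treatment t and an estimated propensity model p_t(x), the standard doubly robust reward for observation i and treatment t takes the form mu_t(x_i) + 1{T_i = t} * (y_i - mu_t(x_i)) / p_t(x_i). This is why both an outcome estimation method and a propensity estimation method are specified below; the exact construction used by the package may differ in its details, so refer to the reward estimation documentation for specifics.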
We will use a categorical_reward_estimator to estimate the rewards (note that we are passing in the test data rather than the training data to ensure we get a fair out-of-sample evaluation):
reward_lnr <- iai::categorical_reward_estimator(
  propensity_estimation_method = "random_forest",
  outcome_estimation_method = "random_forest",
  reward_estimation_method = "doubly_robust",
  random_seed = 1
)
rewards <- iai::fit_predict(reward_lnr, test_X, test_treatments, test_outcomes)
no training training
1 13466.14618 5840.714
2 2950.44706 5562.140
3 10780.62503 4377.647
4 17391.48562 5864.285
5 -599.99004 5225.245
6 6346.62663 15981.110
7 -847.69912 3445.859
8 -469.11793 5503.624
9 -1836.01236 7905.532
10 1586.34820 5100.385
11 17416.11235 4879.462
12 10378.60362 6200.316
13 -337.29477 7499.461
14 17117.64083 5864.029
15 5589.84647 4173.271
16 31.59942 2100.046
17 6388.21546 4067.652
18 1685.01892 5225.815
19 6488.22346 5763.884
20 -2086.81416 2603.910
21 8173.40516 14579.700
22 -2236.90502 6278.348
23 616.79681 4985.877
24 10378.53776 10015.613
25 7603.21607 2371.283
26 -257.77528 2887.120
27 -2080.50426 3777.181
28 8433.54180 7665.401
29 -263.00484 4314.367
30 -941.05120 2423.764
[ reached 'max' / getOption("max.print") -- omitted 186 rows ]
We can now use these reward values to evaluate the prescription in many ways. For example, we might like to see the mean reward achieved across all prescriptions on the test set:
# Mean reward obtained by a policy: for each observation, look up the reward
# corresponding to the treatment that the policy assigns to it
evaluate_reward_mean <- function(treatments, rewards) {
  total <- 0.0
  for (i in seq_along(treatments)) {
    total <- total + rewards[i, treatments[i]]
  }
  total / length(treatments)
}
evaluate_reward_mean(pred_treatments, rewards)
[1] 5963.84
For comparison, we can evaluate the mean reward achieved under the actual treatment assignments observed in the data:
evaluate_reward_mean(test_treatments, rewards)
[1] 5494.877
We can see that the prescriptive tree policy indeed achieves better results than the actual assignments.
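As an additional point of comparison, the same rewards can be used to score the two trivial policies of prescribing training to everyone or to no one. This is an optional sketch beyond the original walkthrough:
# Mean reward if every subject were prescribed training
mean(rewards[, "training"])
# Mean reward if no subject were prescribed training
mean(rewards[, "no training"])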