Quick Start Guide: Optimal Policy Trees with Categorical Treatment
In this guide we will give a demonstration of how to use Optimal Policy Trees with categorical treatment options. For this example, we will use the Credit Approval dataset, where the task is to predict the approval outcome of credit card applications. Because the features have been anonymized for confidentiality reasons, we will arbitrarily select one of the categorical variables, A1, to be the treatment.
Note: this case is not intended to serve as a practical application of policy trees, but rather to serve as an illustration of the training and evaluation process.
First we load in the data and drop 37 rows with missing values:
using CSV, DataFrames
df = CSV.read("crx.data", DataFrame, header=string.("A", 1:16),
              missingstring="?", pool=true)
dropmissing!(df)
653×16 DataFrame
Row │ A1 A2 A3 A4 A5 A6 A7 A8 A9 ⋯
│ String Float64 Float64 String String String String Float64 Stri ⋯
─────┼──────────────────────────────────────────────────────────────────────────
1 │ b 30.83 0.0 u g w v 1.25 t ⋯
2 │ a 58.67 4.46 u g q h 3.04 t
3 │ a 24.5 0.5 u g q h 1.5 t
4 │ b 27.83 1.54 u g w v 3.75 t
5 │ b 20.17 5.625 u g w v 1.71 t ⋯
6 │ b 32.08 4.0 u g m v 2.5 t
7 │ b 33.17 1.04 u g r h 6.5 t
8 │ a 22.92 11.585 u g cc v 0.04 t
⋮ │ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱
647 │ b 36.42 0.75 y p d v 0.585 f ⋯
648 │ b 40.58 3.29 u g m v 3.5 f
649 │ b 21.08 10.085 y p e h 1.25 f
650 │ a 22.67 0.75 u g c v 2.0 f
651 │ a 25.25 13.5 y p ff ff 2.0 f ⋯
652 │ b 17.92 0.205 u g aa v 0.04 f
653 │ b 35.0 3.375 u g c h 8.29 f
8 columns and 638 rows omitted
Policy trees are trained using a features matrix/dataframe X as usual, along with a rewards matrix that has one column for each potential treatment, containing the outcome for each sample under that treatment.
There are two ways to get this rewards matrix:
- in rare cases, the problem may have full information about the outcome associated with each treatment for each sample
- more commonly, we have observational data, and use this partial data to train models to estimate the outcome associated with each treatment
Refer to the documentation on data preparation for more information on the data format.
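To make the format concrete, here is a small, entirely hypothetical rewards matrix for this problem, with one column per level of the treatment A1 (the values are made up purely for illustration):
using DataFrames

# Hypothetical rewards matrix: one row per sample, one column per treatment
# level of A1 ("a" and "b"), each entry holding the (estimated) outcome for
# that sample under that treatment. Values are illustrative only.
rewards_example = DataFrame(a=[0.8, 0.1, 0.5], b=[0.3, 0.9, 0.4])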
In this case, the dataset is observational, and so we will use RewardEstimation to estimate our rewards matrix.
Reward Estimation
First, we split into training and testing:
X = select(df, Not([:A16, :A1]))
treatments = df.A1
outcomes = df.A16 .== "+"
(train_X, train_treatments, train_outcomes), (test_X, test_treatments, test_outcomes) =
    IAI.split_data(:policy_maximize, X, treatments, outcomes, seed=123, train_proportion=0.5)
Note that we have used a 50%/50% training/test split, so that more data is reserved for testing to ensure high-quality reward estimation on the test set.
The treatment, A1, is a categorical variable, so we follow the process for estimating rewards with categorical treatments. Our outcome is binary, so we use a CategoricalClassificationRewardEstimator to estimate the approval outcome under each option with a doubly-robust reward estimation method, using random forests to estimate both propensity scores and outcomes:
reward_lnr = IAI.CategoricalClassificationRewardEstimator(
    propensity_estimator=IAI.RandomForestClassifier(),
    outcome_estimator=IAI.RandomForestClassifier(),
    reward_estimator=:doubly_robust,
    random_seed=123,
)
train_predictions, train_reward_score = IAI.fit_predict!(
    reward_lnr, train_X, train_treatments, train_outcomes,
    propensity_score_criterion=:auc, outcome_score_criterion=:auc)
train_rewards = train_predictions[:reward]
326×2 DataFrame
Row │ a b
│ Float64 Float64
─────┼────────────────────────
1 │ 1.16099 1.0
2 │ 0.51 1.00221
3 │ 0.41 1.29163
4 │ 0.59 1.08292
5 │ 1.2115 0.78
6 │ 0.31 1.09813
7 │ 0.7 1.05226
8 │ 0.04 1.18247
⋮ │ ⋮ ⋮
320 │ 0.15 -0.0550066
321 │ -0.0943777 0.19
322 │ -0.0379918 0.062
323 │ 0.06 -0.0444157
324 │ 0.24 -0.15288
325 │ 0.18 -0.157814
326 │ -2.9726 0.28
311 rows omitted
train_reward_score
Dict{Symbol, Any} with 2 entries:
:propensity => 0.612973
:outcome => Dict("b"=>0.9197, "a"=>0.955762)
We can see that the internal outcome estimation models have AUCs of 0.92 and 0.96, which gives us confidence that the reward estimates are of decent quality and a good basis for training. The AUC for the propensity model is lower at 0.61, suggesting it is difficult to estimate the propensity reliably. The doubly-robust estimation method should help to alleviate this problem, as it is designed to deliver good results if either the propensity scores or the outcomes are estimated well. However, we should always pay attention to these scores and proceed with caution if the estimation quality is low.
Optimal Policy Trees
Now that we have a complete rewards matrix, we can train a tree to learn an optimal prescription policy that maximizes the credit approval outcome. We will use a GridSearch to fit an OptimalTreePolicyMaximizer (note that if we were trying to minimize the outcomes, we would use OptimalTreePolicyMinimizer):
grid = IAI.GridSearch(
    IAI.OptimalTreePolicyMaximizer(
        random_seed=121,
        max_categoric_levels_before_warning=20,
    ),
    max_depth=1:5,
)
IAI.fit!(grid, train_X, train_rewards)
Fitted OptimalTreePolicyMaximizer:
  1) Split: A8 < 1.438
    2) Split: A6 in [aa,c,cc,ff,j,k,w,x] or is missing
      3) Prescribe: a, 127 points, error 5.262
      4) Prescribe: b, 56 points, error 5.255
    5) Split: A6 in [d,e,x]
      6) Prescribe: a, 21 points, error 4.801
      7) Prescribe: b, 122 points, error 4.942
The resulting tree recommends different values for A1 based on other characteristics: namely, the values of A6 and A8. The intensity of the color in each leaf shows the difference in quality between the best and second-best values. We can see that both a and b values are prescribed by the tree, implying each treatment is better suited to certain subgroups.
We can make treatment prescriptions using predict:
IAI.predict(grid, train_X)
326-element Vector{String}:
"b"
"b"
"b"
"b"
"a"
"b"
"b"
"b"
"a"
"a"
⋮
"a"
"b"
"b"
"a"
"a"
"a"
"a"
"b"
"b"
If we want more information about the relative performance of treatments for these points, we can predict the full treatment ranking with predict_treatment_rank:
IAI.predict_treatment_rank(grid, train_X)
326×2 Matrix{String}:
"b" "a"
"b" "a"
"b" "a"
"b" "a"
"a" "b"
"b" "a"
"b" "a"
"b" "a"
"a" "b"
"a" "b"
⋮
"a" "b"
"b" "a"
"b" "a"
"a" "b"
"a" "b"
"a" "b"
"a" "b"
"b" "a"
"b" "a"
To quantify the difference in performance behind the treatment rankings, we can use predict_treatment_outcome to extract the estimated quality of each treatment for each point:
IAI.predict_treatment_outcome(grid, train_X)
326×2 DataFrame
Row │ a b
│ Float64 Float64
─────┼────────────────────
1 │ 0.408147 0.74519
2 │ 0.408147 0.74519
3 │ 0.408147 0.74519
4 │ 0.408147 0.74519
5 │ 0.425737 0.261134
6 │ 0.408147 0.74519
7 │ 0.408147 0.74519
8 │ 0.408147 0.74519
⋮ │ ⋮ ⋮
320 │ 0.128606 0.432777
321 │ 0.425737 0.261134
322 │ 0.425737 0.261134
323 │ 0.425737 0.261134
324 │ 0.88628 0.693037
325 │ 0.128606 0.432777
326 │ 0.408147 0.74519
311 rows omitted
We can also extract the standard errors of the outcome estimates with predict_treatment_outcome_standard_error:
IAI.predict_treatment_outcome_standard_error(grid, train_X)
326×2 DataFrame
Row │ a b
│ Float64 Float64
─────┼──────────────────────
1 │ 0.0786817 0.0452025
2 │ 0.0786817 0.0452025
3 │ 0.0786817 0.0452025
4 │ 0.0786817 0.0452025
5 │ 0.0734146 0.0445378
6 │ 0.0786817 0.0452025
7 │ 0.0786817 0.0452025
8 │ 0.0786817 0.0452025
⋮ │ ⋮ ⋮
320 │ 0.0916095 0.0804062
321 │ 0.0734146 0.0445378
322 │ 0.0734146 0.0445378
323 │ 0.0734146 0.0445378
324 │ 0.128199 0.11603
325 │ 0.0916095 0.0804062
326 │ 0.0786817 0.0452025
311 rows omitted
These standard errors can be combined with the outcome estimates to construct confidence intervals in the usual way.
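For instance, under a normal approximation we could form rough 95% confidence intervals for each treatment outcome estimate as the estimate plus or minus 1.96 standard errors (a minimal sketch using the two DataFrames computed above):
outcomes = IAI.predict_treatment_outcome(grid, train_X)
ses = IAI.predict_treatment_outcome_standard_error(grid, train_X)

# Approximate 95% confidence bounds for each treatment outcome estimate
lower = outcomes .- 1.96 .* ses
upper = outcomes .+ 1.96 .* ses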
Evaluating Optimal Policy Trees
It is critical for a fair evaluation that we do not evaluate the quality of the policy using rewards from our existing reward estimator trained on the training set. This is to avoid any information from the training set leaking through to the out-of-sample evaluation.
Instead, we need to estimate a new set of rewards using only the test set, and evaluate the policy against these rewards:
test_predictions, test_reward_score = IAI.fit_predict!(
    reward_lnr, test_X, test_treatments, test_outcomes,
    propensity_score_criterion=:auc, outcome_score_criterion=:auc)
test_rewards = test_predictions[:reward]
327×2 DataFrame
Row │ a b
│ Float64 Float64
─────┼─────────────────────────
1 │ 0.598 1.16828
2 │ 1.19496 0.706667
3 │ 0.73 1.17169
4 │ 0.82 1.08072
5 │ 0.74 1.06109
6 │ 0.62 1.19721
7 │ 0.82 1.0133
8 │ 0.7 1.30826
⋮ │ ⋮ ⋮
321 │ -0.0958252 0.38
322 │ 0.03 -0.00397411
323 │ 0.04 -0.0297232
324 │ 0.1375 -0.0114039
325 │ -0.907834 0.11
326 │ 0.1 -0.143617
327 │ 0.256667 -0.16448
312 rows omitted
test_reward_score
Dict{Symbol, Any} with 2 entries:
:propensity => 0.622643
:outcome => Dict("b"=>0.924067, "a"=>0.921969)
We see that the scores are similar to those on the training set, giving us confidence that the estimated rewards are a fair reflection of reality and will serve as a good basis for evaluation. The AUC for the propensity model is again on the low side, but this should be compensated for by the doubly-robust estimator.
We can now evaluate the quality of the policy using these newly estimated rewards. First, we will calculate the average predicted credit approval probability under the treatments prescribed by the tree for the test set. To do this, we use predict_outcomes, which uses the model to make prescriptions and looks up the predicted outcomes under those prescriptions:
policy_outcomes = IAI.predict_outcomes(grid, test_X, test_rewards)
327-element Vector{Float64}:
0.5979999999999999
0.7066666666666666
1.1716894684869135
1.0807235744074142
0.74
1.1972140249265495
1.0133023546488187
0.7
0.5662857142857143
0.635
⋮
-0.4322875752771873
-1.5045613910798346
0.3799999999999999
-0.003974108993193948
-0.029723226231603395
-0.01140394479909057
0.1100000000000001
0.10000000000000009
-0.1644800846257941
We can then get the average estimated approval probability under our treatments:
using Statistics
mean(policy_outcomes)
0.41119889912148
We can compare this number to the average approval probability under the treatment assignments that were actually observed:
mean([test_rewards[i, test_treatments[i]] for i in 1:length(test_treatments)])
0.4074034869644
We see that this policy achieves roughly the same estimated average approval probability as the treatments that were actually assigned.
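As an additional point of reference, we could also compute the estimated average approval probability under the naive policies that assign the same treatment to every sample, by taking the column means of the estimated test rewards (a quick sketch, not part of the standard evaluation above):
# Estimated average outcome if every sample were assigned "a" or "b", respectively
mean(test_rewards.a), mean(test_rewards.b)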