# Quick Start Guide: Multi-task Optimal Regression Trees

In this example we will use Optimal Regression Trees (ORT) on the concrete slump test dataset to solve a multi-task regression problem.

This guide assumes you are familiar with ORTs and focuses on aspects that are unique to the multi-task setting. For a general introduction to ORTs, please refer to the ORT quickstart guide.

First we load in the data:

```julia
using CSV, DataFrames
df = CSV.read("slump_test.data", DataFrame)
```
```
103×11 DataFrame
 Row │ No     Cement   Slag     Fly ash  Water    SP       Coarse Aggr.  Fine  ⋯
     │ Int64  Float64  Float64  Float64  Float64  Float64  Float64       Float ⋯
─────┼──────────────────────────────────────────────────────────────────────────
   1 │     1    273.0     82.0    105.0    210.0      9.0         904.0        ⋯
   2 │     2    163.0    149.0    191.0    180.0     12.0         843.0
   3 │     3    162.0    148.0    191.0    179.0     16.0         840.0
   4 │     4    162.0    148.0    190.0    179.0     19.0         838.0
   5 │     5    154.0    112.0    144.0    220.0     10.0         923.0        ⋯
   6 │     6    147.0     89.0    115.0    202.0      9.0         860.0
   7 │     7    152.0    139.0    178.0    168.0     18.0         944.0
   8 │     8    145.0      0.0    227.0    240.0      6.0         750.0
  ⋮  │   ⋮       ⋮        ⋮        ⋮        ⋮        ⋮          ⋮            ⋮ ⋱
  97 │    97    215.6    112.9    239.0    198.7      7.4         884.0        ⋯
  98 │    98    295.3      0.0    239.9    236.2      8.3         780.3
  99 │    99    248.3    101.0    239.1    168.9      7.7         954.2
 100 │   100    248.0    101.0    239.9    169.1      7.7         949.9
 101 │   101    258.8     88.0    239.6    175.3      7.6         938.9        ⋯
 102 │   102    297.1     40.9    239.9    194.0      7.5         908.9
 103 │   103    348.7      0.1    223.1    208.5      9.6         786.2
                                                   4 columns and 88 rows omitted
```

The goal is to predict three characteristics of the concrete from its other properties. We therefore rename the target columns for convenience, separate these targets from the rest of the features, and split the data into training and test sets:

```julia
rename!(df,
    "SLUMP(cm)" => "Slump",
    "FLOW(cm)" => "Flow",
    "Compressive Strength (28-day)(Mpa)" => "Strength",
)
targets = [:Slump, :Flow, :Strength]
X = select(df, Not([:No; targets]))
y = select(df, targets)
(train_X, train_y), (test_X, test_y) = IAI.split_data(:multi_regression, X, y,
                                                      seed=1)
```
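As a quick sanity check, we can inspect the shapes of the resulting splits (the prediction output later in this guide shows the test set contains 31 of the 103 rows):

```julia
# Inspect the train/test split: we expect the 103 rows to be divided
# between train_X and test_X, each keeping all 7 feature columns
size(train_X), size(test_X)
```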

We will use a `GridSearch` to fit an `OptimalTreeMultiRegressor`:

```julia
grid = IAI.GridSearch(
    IAI.OptimalTreeMultiRegressor(
        random_seed=1,
    ),
    max_depth=1:5,
)
IAI.fit!(grid, train_X, train_y)
IAI.get_learner(grid)
```

Optimal Trees Visualization

We can make predictions on new data using `predict`:

```julia
IAI.predict(grid, test_X)
```
```
OrderedCollections.OrderedDict{Symbol, Vector{Float64}} with 3 entries:
  :Slump    => [8.7, 20.5517, 20.5517, 20.5517, 20.5517, 20.5517, 20.5517, 20.5…
  :Flow     => [27.45, 54.4, 54.4, 54.4, 54.4, 54.4, 54.4, 54.4, 54.4, 27.45  ……
  :Strength => [30.087, 35.0178, 35.0178, 35.0178, 35.0178, 35.0178, 35.0178, 3…
```

This returns a dictionary containing the predictions for each of the tasks, which can easily be converted to a DataFrame:

```julia
DataFrame(IAI.predict(grid, test_X))
```
```
31×3 DataFrame
 Row │ Slump    Flow     Strength
     │ Float64  Float64  Float64
─────┼────────────────────────────
   1 │  8.7       27.45   30.087
   2 │ 20.5517    54.4    35.0178
   3 │ 20.5517    54.4    35.0178
   4 │ 20.5517    54.4    35.0178
   5 │ 20.5517    54.4    35.0178
   6 │ 20.5517    54.4    35.0178
   7 │ 20.5517    54.4    35.0178
   8 │ 20.5517    54.4    35.0178
  ⋮  │    ⋮        ⋮        ⋮
  25 │ 20.5517    54.4    35.0178
  26 │ 20.5517    54.4    35.0178
  27 │  0.5       20.0    45.6875
  28 │ 20.5517    54.4    35.0178
  29 │  0.5       20.0    45.6875
  30 │ 20.5517    54.4    35.0178
  31 │ 20.5517    54.4    35.0178
                   16 rows omitted
```

We can also generate the predictions for a specific task by passing the task label:

```julia
IAI.predict(grid, test_X, :Slump)
```
```
31-element Vector{Float64}:
  8.7
 20.551724137931036
 20.551724137931036
 20.551724137931036
 20.551724137931036
 20.551724137931036
 20.551724137931036
 20.551724137931036
 20.551724137931036
  8.7
  ⋮
 20.551724137931036
  0.5
 20.551724137931036
 20.551724137931036
  0.5
 20.551724137931036
  0.5
 20.551724137931036
 20.551724137931036
```
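The per-task prediction vector should match the corresponding entry of the full prediction dictionary, which we can verify directly (a quick consistency check, not part of the original walkthrough):

```julia
# The :Slump entry of the multi-task predictions should equal the
# predictions obtained by requesting the :Slump task directly
preds = IAI.predict(grid, test_X)
preds[:Slump] == IAI.predict(grid, test_X, :Slump)
```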

We can evaluate the quality of the tree using `score` with any of the supported loss functions. For multi-task problems, the returned score is the average of the scores of the individual tasks:

```julia
IAI.score(grid, test_X, test_y)
```
```
-0.1516528561254562
```

We can also calculate the score for a single task by specifying that task:

```julia
IAI.score(grid, test_X, test_y, :Flow)
```
```
-0.163777031780044
```
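To see that the overall score is indeed the per-task average, we can compute the score of each task and take the mean (using the `targets` vector defined earlier; this is an illustrative check, not a required step):

```julia
using Statistics

# Score each task individually; the mean of these values should match
# the overall multi-task score returned by IAI.score(grid, test_X, test_y)
task_scores = [IAI.score(grid, test_X, test_y, t) for t in targets]
mean(task_scores)
```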

### Extensions

The standard ORT extensions (e.g. hyperplane splits, linear regression) are also available in the multi-task setting and are controlled in the usual way.

For instance, we can use Optimal Regression Trees with hyperplane splits:

```julia
grid = IAI.GridSearch(
    IAI.OptimalTreeMultiRegressor(
        random_seed=1,
        max_depth=2,
        hyperplane_config=(sparsity=:all,),
    ),
)
IAI.fit!(grid, train_X, train_y)
IAI.get_learner(grid)
```

Optimal Trees Visualization
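Similarly, linear regression predictions in the leaves can be enabled as in the single-task setting. A sketch, assuming the standard ORT `regression_features` parameter carries over to the multi-task learner:

```julia
grid = IAI.GridSearch(
    IAI.OptimalTreeMultiRegressor(
        random_seed=1,
        max_depth=2,
        # Fit a linear regression model in each leaf using all features
        regression_features=All(),
    ),
)
IAI.fit!(grid, train_X, train_y)
IAI.get_learner(grid)
```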