API Reference

Documentation for the IAIBase public interface.

Data Preparation

IAI.FeatureInput (Type)

Permissible types for specifying the feature data.

The features can be supplied as a Matrix of Reals or as a DataFrame, as follows:

  • numeric features are specified using numeric vectors
  • categoric and ordinal features are specified using CategoricalVectors
  • mixed features are specified using vectors of MixedDatum (see make_mixed_data)
  • missing values are specified using missing

For more details, refer to the data preparation guide in the manual.
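
For illustration, features mixing these column types might be assembled as follows. This is a minimal sketch assuming the DataFrames and CategoricalArrays packages; the column names and values are hypothetical:

using DataFrames, CategoricalArrays

X = DataFrame(
    age    = [25, 32, missing, 41],                            # numeric feature with a missing value
    region = categorical(["north", "south", "south", "east"])  # categoric feature
)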

IAI.MixedDatum (Type)
MixedDatum{T}

Represents a mixed feature value that can either be categoric or of type T.

The value has the following fields:

  • iscat::Bool: true if the value is categoric
  • value_cat: the value if categoric
  • value_else: the value if non-categoric

Mixed features are specified in the data using vectors of MixedDatum. It is recommended to create and work with these vectors of MixedDatum values via make_mixed_data and undo_mixed_data.

IAI.make_mixed_data (Function)
make_mixed_data(input)

Construct a vector of mixed categoric and numeric data from input. All numeric values from input are treated as numeric data, and all remaining values are treated as categoric data.

Examples

Construct a mixed data vector with numeric values and two additional categoric levels ("Unknown" and "NA"):

IAI.make_mixed_data([13, "Unknown", "NA", 2, 4, missing])
6-element Array{MixedDatum{Float64},1}:
 13.0
 "Unknown"
 "NA"
 2.0
 4.0
 missing

make_mixed_data(input, ordinal_levels)

Construct a vector of mixed categoric and ordinal data from input. All values from input that are in ordinal_levels are treated as ordinal data with the ordering indicated by the order of ordinal_levels, and all remaining values are treated as categoric data.

Examples

Construct a mixed data vector with three ordered levels (A < B < C) and two additional categoric levels ("Unknown" and "NA"):

IAI.make_mixed_data(["B", "Unknown", "NA", "C", "A", missing], ["A", "B", "C"])
6-element Array{MixedDatum{CategoricalArrays.CategoricalValue{Any,UInt32}},1}:
 "B"
 "Unknown"
 "NA"
 "C"
 "A"
 missing

IAI.undo_mixed_data (Function)
undo_mixed_data(mixed_data)

Convert a vector of mixed data back to a normal Vector with mixed types.

Examples

Undo the conversion of the numeric mixed data vector:

numeric_mixed = IAI.make_mixed_data([13, "Unknown", "NA", 2, 4, missing])
IAI.undo_mixed_data(numeric_mixed)
6-element Array{Any,1}:
 13.0
   "Unknown"
   "NA"
  2.0
  4.0
   missing

Undo the conversion of the ordinal mixed data vector:

ordinal_mixed = IAI.make_mixed_data(["B", "Unknown", "NA", "C", "A", missing],
                                    ["A", "B", "C"])
IAI.undo_mixed_data(ordinal_mixed)
6-element Array{Union{Missing, String},1}:
 "B"
 "Unknown"
 "NA"
 "C"
 "A"
 missing

IAI.TargetInput (Type)

Permissible types for specifying the problem target. The number and types of the target arguments depend on the problem type (for more information, refer to the data preparation guide in the manual):

Classification

  • y: AbstractVector of class labels

Regression

  • y: AbstractVector of numeric values

Prescription

  • treatments: AbstractVector of treatment labels
  • outcomes: AbstractVector of numeric outcomes

Survival

  • deaths: AbstractVector{Bool} indicating which observations are deaths
  • times: AbstractVector of times for each observation

Imputation

No target required
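
As a quick illustration of the expected shapes, the target arguments for each problem type might look like the following (hypothetical vectors, mirroring the split_data examples below):

y_class    = ["A", "B", "A", "B"]         # classification: class labels
y_reg      = [0.1, 0.2, 0.3, 0.4]         # regression: numeric values
treatments = ["A", "B", "A", "B"]         # prescription: treatment labels
outcomes   = [0.1, 0.2, 0.3, 0.4]         # prescription: numeric outcomes
deaths     = [true, false, true, false]   # survival: death indicators
times      = [1, 2, 3, 4]                 # survival: observation times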

IAI.SampleWeightInput (Type)

Permissible types for specifying sample weights:

  • nothing (default) will assign equal weight to all points
  • Vector or StatsBase.Weights of the weights for each point

Additionally for problems with discrete outcomes (classification/prescription):

  • Dict giving the weight for each label
  • :autobalance to use weights that give each label equal weight

For more information, refer to the data preparation guide in the manual.
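
For example, label-based weights or autobalancing could be passed to fit! as follows. This is a sketch only; lnr, X and y stand for any classification learner and its data:

IAI.fit!(lnr, X, y, sample_weight=Dict("A" => 2.0, "B" => 1.0))  # one weight per label
IAI.fit!(lnr, X, y, sample_weight=:autobalance)                  # equal total weight per label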

IAI.RelativeParameterInput (Type)

Permissible types for specifying parameters relative to the number of samples or features:

  • :all: use all of the samples or features
  • a non-negative Integer: the value to be used
  • a Real between 0 and 1: use this proportion of the total number
  • :sqrt: use the square root of the total number
  • :log2: use the base-2 logarithm of the total number

IAI.split_data (Function)
split_data(task::Symbol, X::FeatureInput, y::TargetInput...;
           keyword_arguments...)

Split the data (X and y) into a tuple of training and testing data: (X_train, y_train...), (X_test, y_test...).

The mechanism used to split the data is determined by task:

  • Stratified:
    • :classification gives a stratified split on the class labels
    • :prescription_minimize or :prescription_maximize gives a stratified split on the treatments
  • Non-stratified:
    • :regression and :survival give randomly split data

Keyword Arguments

  • train_proportion=0.7: proportion of data in training set
  • shuffle=true: whether the returned data is shuffled. If shuffle=false, the split mechanism will not be stratified even if the task defaults to a stratified approach
  • seed=nothing: random seed for splitting, uses the global random state if nothing is specified

Examples

Classification:

X = [1 2; 3 4; 5 6; 7 8]
y = ["A", "B", "A", "B"]
(train_X, train_y), (test_X, test_y) =
    IAI.split_data(:classification, X, y, seed=1)

Regression:

X = [1 2; 3 4; 5 6; 7 8]
y = [0.1, 0.2, 0.3, 0.4]
(train_X, train_y), (test_X, test_y) =
    IAI.split_data(:regression, X, y, seed=1)

Survival:

X = [1 2; 3 4; 5 6; 7 8]
deaths = [true, false, true, false]
times = [1, 2, 3, 4]
(train_X, train_deaths, train_times), (test_X, test_deaths, test_times) =
    IAI.split_data(:survival, X, deaths, times, seed=1)

Prescription:

X = [1 2; 3 4; 5 6; 7 8]
treatments = ["A", "B", "A", "B"]
outcomes = [0.1, 0.2, 0.3, 0.4]
# or :prescription_maximize
(train_X, train_treatments, train_outcomes), (test_X, test_treatments, test_outcomes) =
    IAI.split_data(:prescription_minimize, X, treatments, outcomes, seed=1)

Learners

IAI.Learner (Type)

Abstract type encompassing all learners.

Learners are further divided into two groups:

  • SupervisedLearner for supervised tasks, containing:
    • ClassificationLearner
    • RegressionLearner
    • PrescriptionLearner
    • SurvivalLearner
    • RewardEstimationLearner
  • UnsupervisedLearner for unsupervised tasks, containing:
    • ImputationLearner

Fitting

IAI.fit! (Method)
fit!(lnr::Learner, X::FeatureInput, y::TargetInput...;
     sample_weight::SampleWeightInput=nothing)

Fits a model using the parameters in lnr and the data X and y.
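
A minimal usage sketch, assuming the OptimalTreeClassifier learner from the OptimalTrees module and illustrative data:

X = [1 2; 3 4; 5 6; 7 8]
y = ["A", "B", "A", "B"]

lnr = IAI.OptimalTreeClassifier(max_depth=2)
IAI.fit!(lnr, X, y)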

Evaluation (for supervised learners only)

IAI.predict (Method)
predict(lnr::SupervisedLearner, X::FeatureInput)

Return the predictions made by the trained model in lnr for each point in the data X.

IAI.score (Function)
score(lnr::SupervisedLearner, X::FeatureInput, y::TargetInput...;
      keyword_arguments...)

Calculates the score for lnr on data X and y. All scores are calibrated such that higher is better (and 1 is the maximum possible score).

Keyword Arguments

  • sample_weight::SampleWeightInput=nothing: the weighting to give to each data point.
  • criterion=:default: the scoring criterion to use when evaluating the score (refer to the documentation on scoring criteria for more information). Uses the criterion in lnr if left as :default.
  • extra keyword arguments are passed through to configure the specified scoring criterion (e.g. tweedie_variance_power for :tweedie)
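
For example, with a fitted classification learner lnr and data X and y (a sketch continuing the fit! example above):

IAI.predict(lnr, X)    # predicted labels for each point in X
IAI.score(lnr, X, y)   # score under the criterion stored in lnr (higher is better)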

Utilities

IAI.write_json (Function)
write_json(f, obj; keyword_arguments...)

Write obj (can be a Learner or GridSearch) to f in JSON format.

Keyword Arguments

  • indent=2: indent amount in JSON

IAI.read_json (Function)
read_json(f)

Read in a Learner or GridSearch saved in JSON format from f.
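
A round-trip sketch, assuming f is given as a file path (the file name is hypothetical):

IAI.write_json("learner.json", lnr, indent=2)
lnr2 = IAI.read_json("learner.json")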

IAI.variable_importance (Method)
variable_importance(lnr::Learner)

Generate a ranking of the variables in lnr according to their importance during training. The results are normalized so that they sum to one.

IAI.get_params (Function)
get_params(lnr::Learner)

Return a Dict containing the values of user-specified parameters in lnr.

IAI.set_params! (Function)
set_params!(lnr::Learner; params...)

Update user-specified parameters in lnr with all supplied key-value pairs in params.

IAI.clone (Function)
clone(lnr::Learner)

Return an unfitted copy of lnr with the same user-specified parameters.
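
For example, with a learner lnr (the parameter name max_depth is illustrative, taken from the grid search examples below):

params = IAI.get_params(lnr)        # Dict of user-specified parameters
IAI.set_params!(lnr, max_depth=3)   # update a parameter in place
lnr2 = IAI.clone(lnr)               # unfitted copy with the same parameters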

Visualization in Browser

IAI.write_html (Method)
write_html(f, vis::AbstractVisualization; keyword_arguments...)

Generic function for saving a visualization vis to f in HTML format.

IAI.show_in_browser (Method)
show_in_browser(vis::AbstractVisualization; keyword_arguments...)

Generic function for showing a visualization vis in the browser.

Grid Search and Parameter Validation

IAI.GridSearch (Type)
GridSearch(lnr::Learner, param_grid)

Controls grid search over parameter combinations in param_grid to find the best combination of parameters for lnr.

lnr is a learner with any parameters that should be included in all combinations of parameters tested.

param_grid contains the parameter ranges to search over. These can be supplied in multiple ways, which we demonstrate with examples that create identical GridSearches to tune lnr over the parameters criterion and normalize_X:

  • one or more keyword arguments to the GridSearch constructor containing key=value pairs for all desired parameters and their ranges:

    IAI.GridSearch(lnr, criterion=[:gini, :entropy], normalize_X=[true, false])
  • a Dict or NamedTuple where the keys are the names of the parameters to tune, and the corresponding values are the range over which to vary each parameter:

    IAI.GridSearch(lnr, Dict(:criterion => [:gini, :entropy],
                             :normalize_X => [true, false]))
    IAI.GridSearch(lnr, (criterion=[:gini, :entropy], normalize_X=[true, false]))
  • a Vector{Dict} or Vector{NamedTuple} where each entry specifies a grid of parameters to test (refer to the documentation on multiple parameter grids):

    IAI.GridSearch(lnr, [
        (criterion=:gini,    normalize_X=[true, false]),
        (criterion=:entropy, normalize_X=[true, false]),
    ])

IAI.fit! (Method)
fit!(grid::GridSearch, X::FeatureInput, y::TargetInput...;
     keyword_arguments...)

Fit a grid with data X and y... by randomly splitting the data into training and validation sets in the same way as split_data.

Keyword Arguments

  • train_proportion::Float64=0.7: the proportion of data used in training
  • sample_weight::SampleWeightInput=nothing: the weighting to give to each data point.
  • verbose::Bool=false: if true, prints out the score for each parameter combination during the grid search.
  • validation_criterion::Symbol=:default: the scoring criterion used to evaluate the parameter combinations and determine which is best (refer to the documentation on scoring criteria for more information). Uses the criterion in the lnr contained in grid if left as :default.
  • extra keyword arguments are passed through to configure the specified scoring criterion (e.g. tweedie_variance_power for :tweedie)

fit!(grid::GridSearch, train_X::FeatureInput, train_y::TargetInput...,
     valid_X::FeatureInput, valid_y::TargetInput...; keyword_arguments...)

Fit a grid with explicit training and validation sets.

Supports the same keyword arguments as above, with the exception of train_proportion, since the data has already been split. Additionally, sample_weight accepts a Tuple of sample weight vectors if you would like to specify explicit weights for the training and validation sets.
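
A sketch of both forms, assuming an OptimalTreeClassifier, an illustrative parameter range, and data split beforehand with split_data:

grid = IAI.GridSearch(IAI.OptimalTreeClassifier(), max_depth=1:3)
IAI.fit!(grid, X, y)                                # random 70/30 split
IAI.fit!(grid, train_X, train_y, valid_X, valid_y)  # explicit training and validation sets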

IAI.fit_cv! (Function)
fit_cv!(grid::GridSearch, X::FeatureInput, y::TargetInput...;
        keyword_arguments...)

Fit a grid with data X and y... using k-fold cross-validation.

The keyword arguments are the same as for fitting the grid with randomly split data using IAI.fit!, except the train_proportion argument is replaced by n_folds, which indicates the number of folds to use in the cross-validation (defaulting to 5).
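
For example, with the grid from above (showing n_folds explicitly; 5 is the default):

IAI.fit_cv!(grid, X, y, n_folds=5)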

IAI.get_learner (Function)
get_learner(grid::GridSearch)

Return the final fitted learner using the best parameter combination from the grid.

IAI.get_best_params (Function)
get_best_params(grid::GridSearch)

Return the best parameter combination from the grid.

Examples

Example output from a GridSearch used to tune an OptimalTreeClassifier:

IAI.get_best_params(grid)
Dict{Symbol,Any} with 2 entries:
  :cp        => 0.0357143
  :max_depth => 3

IAI.get_grid_results (Function)
get_grid_results(grid::GridSearch)

Return a DataFrame summarizing the results from the grid search.

Each row corresponds to a single parameter combination from the grid search, and contains:

  • the value of each parameter
  • the training and validation scores of the learner trained using these parameters
  • the rank of this parameter combination according to the validation score (where a rank of 1 indicates the best parameter combination)

When fitting the grid using cross-validation, the training and validation scores for each fold are shown, along with the mean and standard deviation of these scores.

Examples

Example output from a GridSearch used to tune an OptimalTreeClassifier:

IAI.get_grid_results(grid)
3×5 DataFrame
│ Row │ max_depth │ cp        │ train_score │ valid_score │ rank_valid_score │
│     │ Int64     │ Float64   │ Float64     │ Float64     │ Int64            │
├─────┼───────────┼───────────┼─────────────┼─────────────┼──────────────────┤
│ 1   │ 1         │ 0.25      │ 0.666667    │ 0.666667    │ 3                │
│ 2   │ 2         │ 0.228571  │ 0.971429    │ 0.911111    │ 2                │
│ 3   │ 3         │ 0.0357143 │ 0.980952    │ 0.915556    │ 1                │

Task-specific Functions

These functions are only available to learners of the appropriate type for the problem.

Classification

IAI.predict_proba (Function)
predict_proba(lnr::ClassificationLearner, X::FeatureInput)

Return the probabilities of class membership made by the trained model in lnr for each point in the features X.

IAI.ROCCurve (Type)

Container for ROC curve information with the following fields:

  • coords::Vector{Dict}: Vector of Dicts representing the points on the curve. Each Dict contains the following keys:
    • :fpr: false positive rate at the given threshold
    • :tpr: true positive rate at the given threshold
    • :threshold: the threshold
  • auc::Float64: the area-under-the-curve (AUC)

The resulting curve can be visualized in the browser using show_in_browser, or with write_html to save the visualization in HTML format.

IAI.ROCCurve (Method)
ROCCurve(probs::AbstractVector{<:Real}, y::AbstractVector, positive_label)

Construct a ROCCurve from predicted probabilities probs and true labels y. One of the labels contained in y must be specified as positive_label, so that probs gives the predicted probability that each sample's label is positive_label.

Examples

Calculate AUC from predicted probabilities and true labels:

probs = [0.1, 0.8, 0.4, 0.3]
y = ["A", "B", "A", "B"]
IAI.ROCCurve(probs, y, positive_label="B").auc
0.75

IAI.ROCCurve (Method)
ROCCurve(lnr::ClassificationLearner, X::FeatureInput, y::AbstractVector)

Construct a ROCCurve using trained lnr on the features X and labels y.

Info

Can only be applied to classification problems with $K=2$ classes.

IAI.write_html (Method)
write_html(f, roc::ROCCurve)

Write interactive browser visualization of roc to f in HTML format.

IAI.show_in_browser (Method)
show_in_browser(roc::ROCCurve)

Display an interactive visualization of roc in the browser.
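
For example, with a trained two-class learner lnr (the output path is hypothetical):

roc = IAI.ROCCurve(lnr, X, y)
IAI.write_html("roc.html", roc)   # save the visualization to a file
IAI.show_in_browser(roc)          # or open it directly in the browser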

Prescription

IAI.predict_outcomes (Method)
predict_outcomes(lnr::PrescriptionLearner, X::FeatureInput)

Return a DataFrame containing the predicted outcome for each treatment option made by the trained model in lnr for each point in the features X.

Policy

IAI.predict_outcomes (Method)
predict_outcomes(lnr::PolicyLearner, X::FeatureInput, rewards::FeatureInput)

Return the outcome from rewards for each point in the features X under the prescriptions made by the trained model in lnr.

Survival

IAI.SurvivalCurve (Type)

Container for survival curve information.

Use curve[t] to get the mortality probability prediction from curve at time t. This returns the cumulative distribution function evaluated at time t, i.e., the probability that death occurs at or before time t.

The data underlying the curve can be extracted with get_survival_curve_data.
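
A small sketch, assuming curve is a SurvivalCurve obtained from a trained survival learner:

curve[10]                                   # mortality probability at or before time 10
data = IAI.get_survival_curve_data(curve)
data[:times], data[:coefs]                  # breakpoint times and mortality probabilities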

IAI.get_survival_curve_data (Function)
get_survival_curve_data(curve::SurvivalCurve)

Extract the underlying data from curve as a Dict with two keys:

  • :times: the time for each breakpoint on the curve
  • :coefs: the mortality probability for each breakpoint on the curve

IAI.predict_hazard (Function)
predict_hazard(lnr::SurvivalLearner, X::FeatureInput)

Return the fitted hazard coefficient estimate made by the trained model in lnr for each point in the data X. A higher hazard coefficient estimate corresponds to a smaller predicted survival time.

IAI.predict_expected_survival_time (Function)
predict_expected_survival_time(lnr::SurvivalLearner, X::FeatureInput)

Return the expected time to survival made by the trained model in lnr for each point in the data X.

Imputation

IAI.transform (Function)
transform(lnr::ImputationLearner, X::FeatureInput)

Return a DataFrame containing the features X with all missing values imputed by the fitted imputation model in lnr.

IAI.fit_transform! (Function)
fit_transform!(lnr::ImputationLearner, X::FeatureInput; kwargs...)

Fit lnr with an imputation model on features X and return a DataFrame containing the features X with all missing values imputed by lnr. Equivalent to calling fit!(lnr, X; kwargs...) followed by transform(lnr, X).
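
For example, with an imputation learner lnr, training data X, and a hypothetical new dataset X_new:

X_imputed = IAI.fit_transform!(lnr, X)      # fit the imputation model and impute X
X_new_imputed = IAI.transform(lnr, X_new)   # impute new data with the fitted model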


fit_transform!(grid::GridSearch, X::FeatureInput; kwargs...)

As fit_transform! for an imputation learner, but performs validation over the grid parameters during training before returning the final imputed DataFrame.


fit_transform!(grid::GridSearch, train_X::FeatureInput,
               valid_X::FeatureInput; kwargs...)

As fit_transform! but performs validation with the pre-split training and validation sets train_X and valid_X.

Reward Estimation

IAI.predict (Method)
predict(lnr::RewardEstimationLearner, X::FeatureInput,
        treatments::AbstractVector, outcomes::AbstractVector)

Return counterfactual rewards estimated by lnr for each observation in the data given by X, treatments and outcomes.

IAI.fit_predict! (Function)
fit_predict!(lnr::RewardEstimationLearner, X::FeatureInput,
             treatments::AbstractVector, outcomes::AbstractVector;
             kwargs...)

Fit lnr with a reward estimation model on features X, treatments treatments, and outcomes outcomes, and return predicted counterfactual rewards for each observation.
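
A minimal sketch, with lnr standing for any reward estimation learner and the data following the prescription examples above:

rewards = IAI.fit_predict!(lnr, X, treatments, outcomes)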