Wrapper around multiple PipeOps to help in the creation of complex survival reduction methods. Three reductions are currently implemented; see Details.
Usage
pipeline_survtoregr(
  method = 1,
  regr_learner = lrn("regr.featureless"),
  distrcompose = TRUE,
  distr_estimator = lrn("surv.kaplan"),
  regr_se_learner = NULL,
  surv_learner = lrn("surv.coxph"),
  survregr_params = list(method = "ipcw", estimator = "kaplan", alpha = 1),
  distrcompose_params = list(form = "aft"),
  probregr_params = list(dist = "Uniform"),
  learnercv_params = list(resampling.method = "insample"),
  graph_learner = FALSE
)
Arguments
- method
  (integer(1))
  Reduction method to use, corresponds to those in Details. Default is 1.
- regr_learner
  LearnerRegr
  Regression learner to fit to the transformed TaskRegr. If regr_se_learner is NULL in method 2, then regr_learner must have se predict_type.
- distrcompose
  (logical(1))
  For methods 1 and 3, if TRUE (default) then PipeOpDistrCompositor is utilised to transform the deterministic predictions to a survival distribution.
- distr_estimator
  LearnerSurv
  For methods 1 and 3, if distrcompose = TRUE then specifies the learner used to estimate the baseline hazard; must have predict_type distr.
- regr_se_learner
  LearnerRegr
  For method 2, if regr_learner is not used to predict the se, then a LearnerRegr with se predict_type must be provided.
- surv_learner
  LearnerSurv
  For method 3, a LearnerSurv with lp predict_type to estimate linear predictors.
- survregr_params
  (list())
  Parameters passed to PipeOpTaskSurvRegr; the default is a survival to regression transformation via ipcw, with weighting determined by Kaplan-Meier and no additional penalty for censoring.
- distrcompose_params
  (list())
  Parameters passed to PipeOpDistrCompositor; the default is the accelerated failure time model form.
- probregr_params
  (list())
  Parameters passed to PipeOpProbregr; the default is a Uniform distribution for composition.
- learnercv_params
  (list())
  Parameters passed to PipeOpLearnerCV; the default is insample resampling.
- graph_learner
  (logical(1))
  If TRUE, the Graph is wrapped and returned as a GraphLearner, otherwise (default) it is returned as a Graph. A sketch of the GraphLearner usage follows this list.
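For illustration, a minimal sketch of the graph_learner option; this assumes mlr3proba is installed and provides both this pipeline and the rats task, as in the Examples below. With graph_learner = TRUE the returned object behaves like any other Learner.

library(mlr3)
library(mlr3pipelines)
library(mlr3proba)

# wrap the whole reduction pipeline as a single GraphLearner
glrn = ppl(
  "survtoregr",
  method = 1,
  regr_learner = lrn("regr.featureless"),
  graph_learner = TRUE
)

# train and predict like any other Learner
glrn$train(tsk("rats"))
glrn$predict(tsk("rats"))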
Details
Three reduction strategies are implemented, these are:

1. Survival to Deterministic Regression A
   1. PipeOpTaskSurvRegr converts the TaskSurv to a TaskRegr.
   2. A LearnerRegr is fit and predicted on the new TaskRegr.
   3. PipeOpPredRegrSurv transforms the resulting PredictionRegr to a PredictionSurv.
2. Survival to Probabilistic Regression
   1. PipeOpTaskSurvRegr converts the TaskSurv to a TaskRegr.
   2. A LearnerRegr is fit on the new TaskRegr to predict response; optionally a second LearnerRegr can be fit to predict se.
   3. PipeOpProbregr composes a distr prediction from the learner(s).
   4. PipeOpPredRegrSurv transforms the resulting PredictionRegr to a PredictionSurv.
3. Survival to Deterministic Regression B
   1. PipeOpLearnerCV cross-validates and makes predictions from a linear LearnerSurv with lp predict type on the original TaskSurv.
   2. PipeOpTaskSurvRegr transforms the lp predictions into the target of a TaskRegr with the same features as the original TaskSurv.
   3. A LearnerRegr is fit and predicted on the new TaskRegr.
   4. PipeOpPredRegrSurv transforms the resulting PredictionRegr to a PredictionSurv.
   5. Optionally: PipeOpDistrCompositor is used to compose a distr predict_type from the predicted lp predict_type (a sketch of this variant follows the list).
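As a sketch of reduction B with the optional composition left on; the learner choices below simply reuse the defaults from Usage, and mlr3, mlr3pipelines and mlr3proba are assumed to be loaded.

library(mlr3)
library(mlr3pipelines)
library(mlr3proba)

# method 3: lp predictions from a Cox model are regressed on, then
# composed to a distr prediction with a Kaplan-Meier baseline
pipe = ppl(
  "survtoregr",
  method = 3,
  regr_learner = lrn("regr.featureless"),
  surv_learner = lrn("surv.coxph"),
  distrcompose = TRUE,
  distr_estimator = lrn("surv.kaplan")
)
pipe$train(tsk("rats"))
pipe$predict(tsk("rats"))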
Interpretation:

1. Once a dataset has censoring removed (by a given method), a regression learner can predict the survival time as the response.
2. This is a very similar reduction to the first method, with the main difference being the distribution composition. In the first case this is composed in a survival framework by assuming a linear model form and a baseline hazard estimator; in the second case the composition is in a regression framework. The latter case could result in problematic negative predictions and should therefore be interpreted with caution, however a wider choice of distributions makes it a more flexible composition. A sketch of this setup follows the list.
3. This is a rarer use-case that bypasses censoring not by removing it, but by first predicting the linear predictor from a survival model and then fitting a regression model on these predictions. The resulting regression predictions can then be viewed as the linear predictors of the new data, which can ultimately be composed to a distribution.
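A sketch of the second point, i.e. method 2 with a separate regression learner supplying the standard errors. This again assumes mlr3, mlr3pipelines and mlr3proba are loaded; regr.featureless is used here only because it supports se prediction.

library(mlr3)
library(mlr3pipelines)
library(mlr3proba)

# method 2: one learner predicts the response, a second predicts the se,
# and PipeOpProbregr composes both into a distr prediction
pipe = ppl(
  "survtoregr",
  method = 2,
  regr_learner = lrn("regr.featureless"),
  regr_se_learner = lrn("regr.featureless", predict_type = "se")
)
pipe$train(tsk("rats"))
pipe$predict(tsk("rats"))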
Examples
if (FALSE) { # \dontrun{
library(mlr3)
library(mlr3pipelines)

task = tsk("rats")

# method 1 with censoring deletion, compose to distribution
pipe = ppl(
  "survtoregr",
  method = 1,
  regr_learner = lrn("regr.featureless"),
  survregr_params = list(method = "delete")
)
pipe$train(task)
pipe$predict(task)

# method 2 with censoring imputation (mrl), one regr learner
pipe = ppl(
  "survtoregr",
  method = 2,
  regr_learner = lrn("regr.featureless", predict_type = "se"),
  survregr_params = list(method = "mrl")
)
pipe$train(task)
pipe$predict(task)

# method 3 with censoring omission and no composition, insample resampling
pipe = ppl(
  "survtoregr",
  method = 3,
  regr_learner = lrn("regr.featureless"),
  distrcompose = FALSE,
  surv_learner = lrn("surv.coxph"),
  survregr_params = list(method = "omission")
)
pipe$train(task)
pipe$predict(task)
} # }
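Not part of the original examples, but a natural follow-up sketch: resampling the pipeline as a GraphLearner and comparing it against a plain Cox model. The resampling and measure choices below are illustrative assumptions.

library(mlr3)
library(mlr3pipelines)
library(mlr3proba)

task = tsk("rats")
learners = list(
  ppl(
    "survtoregr",
    method = 1,
    regr_learner = lrn("regr.featureless"),
    graph_learner = TRUE
  ),
  lrn("surv.coxph")
)

# benchmark both learners with 3-fold cross-validation and compare
# their concordance indices
design = benchmark_grid(task, learners, rsmp("cv", folds = 3))
bmr = benchmark(design)
bmr$aggregate(msr("surv.cindex"))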