Source: R/LearnerSurvBlackboost.R
Calls mboost::blackboost().

- lp is predicted by mboost::predict.mboost()
- distr is predicted by mboost::survFit(), which assumes a PH fit with a Breslow estimator
- crank is identical to lp
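
For illustration, here is a minimal sketch of training the learner and inspecting these three prediction types. It assumes mlr3proba (the package providing this learner) and mboost are installed; the task generator and prediction fields are standard mlr3/mlr3proba objects.

library(mlr3)
library(mlr3proba)  # assumed to provide this learner and the "simsurv" task generator

task = tgen("simsurv")$generate(50)
learner = lrn("surv.blackboost")
learner$train(task)

p = learner$predict(task)
p$lp     # linear predictor from mboost::predict.mboost()
p$crank  # identical to lp
p$distr  # survival distribution from mboost::survFit()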
The dist parameter is specified slightly differently than in mboost: whereas mboost takes in distribution objects, this learner takes a string identifying which distribution to use. Because the default family in mboost is Gaussian, which is not compatible with survival models, the default here is "coxph".
If the value given to the Family parameter is "custom.family", then an object of class mboost::Family() needs to be passed to the custom.family parameter.
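
As a hedged sketch of the above (the exact hyperparameter ids are assumptions based on the wording of this page and should be checked against the learner's param_set), selecting a custom mboost family might look like:

library(mlr3)
library(mlr3proba)  # assumed to provide this learner

learner = lrn("surv.blackboost")
learner$param_set$ids()  # list the hyperparameter ids actually exposed

# Hypothetical: select the custom-family route, ids as described above (not verified)
# learner$param_set$values = list(
#   family = "custom.family",
#   custom.family = mboost::CoxPH()  # any object returned by an mboost::Family() constructor
# )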
An R6::R6Class() inheriting from LearnerSurv. The learner can be constructed with LearnerSurvBlackboost$new(), mlr_learners$get("surv.blackboost"), or lrn("surv.blackboost").
Type: "surv"
Predict Types: distr, crank, lp
Feature Types: integer, numeric, factor
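
These properties can also be queried from the constructed learner, for example:

library(mlr3)
library(mlr3proba)  # assumed to provide this learner

learner = lrn("surv.blackboost")
learner$predict_types  # supported predict types: distr, crank, lp
learner$feature_types  # supported feature types: integer, numeric, factor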
Peter Buehlmann and Torsten Hothorn (2007), Boosting algorithms: regularization, prediction and model fitting. Statistical Science, 22(4), 477–505.
Torsten Hothorn, Kurt Hornik and Achim Zeileis (2006). Unbiased recursive partitioning: A conditional inference framework. Journal of Computational and Graphical Statistics, 15(3), 651–674.
Yoav Freund and Robert E. Schapire (1996), Experiments with a new boosting algorithm. In Machine Learning: Proc. Thirteenth International Conference, 148–156.
Jerome H. Friedman (2001), Greedy function approximation: A gradient boosting machine. The Annals of Statistics, 29, 1189–1232.
Greg Ridgeway (1999), The state of boosting. Computing Science and Statistics, 31, 172–181.
Other survival learners: LearnerSurvCVGlmnet, LearnerSurvCoxPH, LearnerSurvFlexible, LearnerSurvGBM, LearnerSurvGamboost, LearnerSurvGlmboost, LearnerSurvGlmnet, LearnerSurvKaplan, LearnerSurvMboost, LearnerSurvNelson, LearnerSurvParametric, LearnerSurvPenalized, LearnerSurvRandomForestSRC, LearnerSurvRanger, LearnerSurvRpart, LearnerSurvSVM
library(mlr3)
task = tgen("simsurv")$generate(200)
learner = lrn("surv.blackboost")
resampling = rsmp("cv", folds = 3)
resample(task, learner, resampling)
#> <ResampleResult> of 3 iterations
#> * Task: simsurv
#> * Learner: surv.blackboost
#> * Warnings: 0 in 0 iterations
#> * Errors: 0 in 0 iterations