Compatible learner wrappers for this package must follow a specific format. Each wrapper takes as input lists called train and test, each containing named objects $Y and $X, which hold, respectively, the outcomes and predictors for that fold. Other options may be passed to the function as well. The function must output a list with the following named objects: test_pred = predictions of test$Y based on the learner fit using train$X; train_pred = predictions of train$Y based on the learner fit using train$X; model = the fitted model (needed only if you wish to inspect the model later; not used for internal computations); train_y = a copy of train$Y; test_y = a copy of test$Y.
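As a minimal sketch of this contract, a custom wrapper around a logistic regression might look as follows. The name glm_wrapper is illustrative and not part of this package; it assumes a binary outcome in $Y.

```r
# Hypothetical wrapper illustrating the required input/output format,
# using stats::glm as the learner (assumes binary Y).
glm_wrapper <- function(test, train, ...) {
  # combine outcome and predictors for the training fold
  train_df <- data.frame(Y = train$Y, train$X)
  # fit a logistic regression on the training fold
  fit <- stats::glm(Y ~ ., data = train_df, family = stats::binomial())
  # return the list format the package expects
  list(
    test_pred = predict(fit, newdata = data.frame(test$X), type = "response"),
    train_pred = predict(fit, newdata = data.frame(train$X), type = "response"),
    model = fit,
    train_y = train$Y,
    test_y = test$Y
  )
}
```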

xgboost_wrapper(
  test,
  train,
  ntrees = 500,
  max_depth = 4,
  shrinkage = 0.1,
  minobspernode = 2,
  params = list(),
  nthread = 1,
  verbose = 0,
  save_period = NULL
)

Arguments

test

A list with named objects Y and X (see description).

train

A list with named objects Y and X (see description).

ntrees

See xgboost

max_depth

See xgboost

shrinkage

See xgboost

minobspernode

See xgboost

params

See xgboost

nthread

See xgboost

verbose

See xgboost

save_period

See xgboost

Value

A list with named objects (see description).

Details

This particular wrapper implements eXtreme gradient boosting using xgboost. We refer readers to the original package's documentation for more details.

Examples

# simulate data
# make list of training data
train_X <- data.frame(x1 = runif(50))
train_Y <- rbinom(50, 1, plogis(train_X$x1))
train <- list(Y = train_Y, X = train_X)
# make list of test data
test_X <- data.frame(x1 = runif(50))
test_Y <- rbinom(50, 1, plogis(test_X$x1))
test <- list(Y = test_Y, X = test_X)
# fit xgboost
xgb_wrap <- xgboost_wrapper(train = train, test = test)