cv_scrnp.Rd
This function computes K-fold cross-validated estimates of the sensitivity-constrained rate of negative prediction (SCRNP). This quantity can be interpreted as the rate of negative classification for a fixed constraint on the sensitivity of a prediction algorithm. Thus, if an algorithm has a high SCRNP, it will also have a high positive predictive value.
cv_scrnp(
  Y,
  X,
  K = 10,
  sens = 0.95,
  learner = "glm_wrapper",
  nested_cv = TRUE,
  nested_K = K - 1,
  parallel = FALSE,
  max_cvtmle_iter = 10,
  cvtmle_ictol = 1/length(Y),
  quantile_type = 8,
  prediction_list = NULL,
  ...
)
Arguments

Y: A numeric vector of outcomes, assumed to equal 0 or 1.

X: A data.frame of predictor variables.

K: The number of cross-validation folds (default is 10).

sens: The sensitivity constraint imposed on the rate of negative prediction (see description).

learner: A wrapper that implements the desired method for building a prediction algorithm.

nested_cv: A boolean indicating whether nested cross-validation should be used to estimate the distribution of the prediction function. Default is TRUE.

nested_K: If nested cross-validation is used, how many inner folds should there be? Default is K - 1.

parallel: A boolean indicating whether prediction algorithms should be trained in parallel. Defaults to FALSE.

max_cvtmle_iter: Maximum number of iterations for the bias-correction step of the CV-TMLE estimator (default 10).

cvtmle_ictol: The CV-TMLE will iterate until max_cvtmle_iter is reached or until the mean of the cross-validated efficient influence function is smaller than cvtmle_ictol (default 1/length(Y)).

quantile_type: Type of quantile estimator to be used; see quantile for a description (default 8).

prediction_list: For power users: a list of predictions made by learner, which saves re-fitting the prediction algorithms.

...: Other arguments, not currently used.
Value

An object of class "scrnp" with the following entries:
est_cvtmle: cross-validated targeted minimum loss-based estimate of K-fold CV SCRNP

iter_cvtmle: number of iterations needed to achieve convergence of the CV-TMLE algorithm

cvtmle_trace: the value of the CV-TMLE at each iteration of the targeting algorithm

se_cvtmle: estimated standard error based on targeted nuisance parameters

est_init: plug-in estimate of K-fold CV SCRNP, where nuisance parameters are estimated in the training sample

est_empirical: the standard K-fold CV SCRNP estimator

se_empirical: estimated standard error for the standard estimator

est_onestep: cross-validated one-step estimate of K-fold CV SCRNP

se_onestep: estimated standard error for the one-step estimator

est_esteq: cross-validated estimating-equations estimate of K-fold CV SCRNP (here, equivalent to the one-step, since the estimating equation is linear in SCRNP)

se_esteq: estimated standard error for the estimating-equations estimator (same as one-step)

folds: list of observation indexes in each validation fold

ic_cvtmle: influence function evaluated at the targeted nuisance parameter estimates

ic_onestep: influence function evaluated at the training-fold-estimated nuisance parameters

ic_esteq: influence function evaluated at the training-fold-estimated nuisance parameters

ic_empirical: influence function evaluated at the validation-fold-estimated nuisance parameters

prediction_list: a list of output from the cross-validated model training; see the individual wrapper function documentation for further details
Details

Estimating the SCRNP using K-fold cross-validation is problematic. If the data are partitioned into K distinct groups, then, depending on the sample size and the choice of K, the validation sample may be quite small. Estimating the SCRNP requires estimating a quantile of the predictor's distribution, and more extreme quantiles (which correspond to high sensitivity constraints) are difficult to estimate from few observations. Here, we instead estimate the relevant nuisance parameters in the training sample and use the validation sample to perform a form of bias correction, either through cross-validated targeted minimum loss-based estimation, estimating equations, or one-step estimation. When aggressive learning algorithms are applied, an additional layer of cross-validation in the training sample is needed to estimate the nuisance parameters. This is controlled via the nested_cv option.
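The instability described above is easy to see directly: an extreme quantile estimated from a validation-fold-sized sample varies far more than one estimated from the full sample. A small simulation sketch (not part of the package):

```r
# Sampling variability of an extreme quantile at small vs. moderate n,
# using the same quantile estimator type as the function's default
set.seed(1)
extreme_q <- function(n) quantile(rnorm(n), probs = 0.05, type = 8)
sd(replicate(1000, extreme_q(20)))   # validation-fold-sized sample
sd(replicate(1000, extreme_q(200)))  # full-sample-sized sample
```

The first standard deviation is substantially larger, which is why the nuisance quantile is estimated in the larger training sample.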
Examples

# simulate data
n <- 200
p <- 10
X <- data.frame(matrix(rnorm(n * p), nrow = n, ncol = p))
Y <- rbinom(n, 1, plogis(X[, 1] + X[, 10]))

# estimate cv scrnp of logistic regression
scrnp_ests <- cv_scrnp(Y = Y, X = X, K = 5,
                       nested_cv = FALSE,
                       learner = "glm_wrapper")

# estimate cv scrnp of random forest with nested
# cross-validation for nuisance parameter estimation
scrnp_ests <- cv_scrnp(Y = Y, X = X, K = 5,
                       nested_cv = TRUE,
                       learner = "randomforest_wrapper")
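The returned object is a list with the components described above. For example, a Wald-style confidence interval for the CV-TMLE can be formed from the corresponding point estimate and standard error (a usage sketch, continuing from the example above):

```r
# point estimate and a 95% Wald-style confidence interval for the CV-TMLE
scrnp_ests$est_cvtmle
scrnp_ests$est_cvtmle + c(-1.96, 1.96) * scrnp_ests$se_cvtmle
```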