Extensible, Parallelizable Implementation of the Random Forest Algorithm
Scalable implementation of classification and regression forests, as described by Breiman (2001).
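As a minimal illustration of ranger's formula interface (the dataset and settings below are arbitrary, not from the package description):

```r
# Fit a classification forest on the built-in iris data with ranger.
library(ranger)

set.seed(42)
rf <- ranger(Species ~ ., data = iris, num.trees = 200)

# Out-of-bag prediction error estimated during training.
rf$prediction.error
```

The out-of-bag error comes for free from the bootstrap resampling, so no separate validation split is needed for a quick accuracy check.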
Autoencoding Random Forests
Autoencoding Random Forests ('RFAE') provide a method to
autoencode mixed-type tabular data using Random Forests ('RF'), which
involves projecting the data to a latent feature space of user-chosen
dimensionality (usually a lower dimension), and then decoding the latent
representations back into the input space. The encoding stage is useful for
feature engineering and data visualisation tasks, akin to how principal
component analysis ('PCA') is used, and the decoding stage is useful
for compression and denoising tasks. At its core, 'RFAE' is a
post-processing pipeline on a trained random forest model. This means
that it can accept any trained RF of 'ranger' object type: 'RF', 'URF' or
'ARF'. Because of this, it inherits Random Forests' robust performance and
capacity to seamlessly handle mixed-type tabular data. For more details, see
Vu et al. (2025).
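Since RFAE post-processes a fitted 'ranger' forest, the raw material such a pipeline works from — each observation's per-tree leaf assignment — can be extracted directly in 'ranger'. The sketch below shows only that forest representation, not the RFAE encoder/decoder itself, whose API is not part of this description:

```r
# Extract terminal-node coordinates of each observation under a fitted
# ranger forest: one row per observation, one column per tree, entries
# are leaf IDs. A forest-based autoencoding pipeline post-processes
# exactly this kind of representation.
library(ranger)

set.seed(1)
rf <- ranger(Species ~ ., data = iris, num.trees = 50)

leaves <- predict(rf, data = iris, type = "terminalNodes")$predictions
dim(leaves)  # 150 observations x 50 trees
```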
Distributional Random Forests
An implementation of distributional random forests as introduced in Ćevid, Michel, Näf, Meinshausen & Bühlmann (2022).
Ordered Random Forests
An implementation of the Ordered Forest estimator as developed
in Lechner & Okasa (2019).
Generalized Random Forests
Forest-based statistical estimation and inference. GRF provides non-parametric methods for heterogeneous treatment effects estimation (optionally using right-censored outcomes, multiple treatment arms or outcomes, or instrumental variables), as well as least-squares regression, quantile regression, and survival regression, all with support for missing covariates.
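As a small illustration of the grf interface for heterogeneous treatment effects, a causal forest on simulated data (the data-generating process below is arbitrary and chosen only for the example):

```r
# Estimate heterogeneous treatment effects with a causal forest (grf).
library(grf)

set.seed(7)
n <- 500; p <- 5
X <- matrix(rnorm(n * p), n, p)
W <- rbinom(n, 1, 0.5)                        # randomized binary treatment
Y <- pmax(X[, 1], 0) * W + X[, 2] + rnorm(n)  # effect varies with X[, 1]

cf <- causal_forest(X, Y, W)

# Doubly robust estimate of the average treatment effect.
average_treatment_effect(cf, target.sample = "all")
```

Because the treatment effect here depends on `X[, 1]`, `predict(cf)$predictions` also gives per-observation effect estimates, which is the "heterogeneous" part of the description.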
Modified Ordered Random Forest
Nonparametric estimator of the ordered choice model using random forests. The estimator modifies a standard random forest splitting criterion to build a collection of forests, each estimating the conditional probability of a single class. The package also implements a nonparametric estimator of the covariates’ marginal effects.
Random Forests for Longitudinal Data
Random forests are a statistical learning method widely used across scientific research, owing to their ability to learn complex relationships between input and output variables and to handle high-dimensional data. However, current random forest approaches are not flexible enough to handle longitudinal data. This package proposes a general approach to random forests for high-dimensional longitudinal data. It includes a flexible stochastic model that allows the covariance structure to vary over time, and introduces a new method that takes intra-individual covariance into account when building the forests. The method is fully detailed in Capitaine et al. (2020).
Visually Exploring Random Forests
Graphical elements for exploring random forests built with the 'randomForest' or 'randomForestSRC' packages (survival, regression, and classification forests), with plotting via the 'ggplot2' package.
Permutation Significance for Random Forests
Estimate False Discovery Rates (FDRs) for importance metrics from random forest runs.
Covariance Regression with Random Forests
Covariance Regression with Random Forests (CovRegRF) is a
random forest method for estimating the covariance matrix of a
multivariate response given a set of covariates. Random forest trees
are built with a new splitting rule which is designed to maximize the
distance between the sample covariance matrix estimates of the child
nodes. The method is described in Alakus et al. (2023).