An Implementation of the Hedged Random Forest Algorithm
This algorithm is described in detail in the paper "Hedging Forecast Combinations With an Application to the Random Forest" by Beck et al. (2024) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5032102>. The package provides hedgedrf() to train a Hedged Random Forest model on a dataset and predict.hedgedrf() to make predictions with the model.
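A minimal usage sketch, assuming a ranger-style formula/data interface; the num.trees argument and the shape of the returned predictions are assumptions, not confirmed from the package documentation:

    # install.packages("hedgedrf")
    library(hedgedrf)

    train <- mtcars[1:24, ]
    test  <- mtcars[25:32, ]

    ## Train a hedged random forest (assumed ranger-style interface;
    ## 'num.trees' is an assumption, not a documented argument).
    fit <- hedgedrf(mpg ~ ., data = train, num.trees = 500)

    ## Predict on held-out rows via the predict method named above.
    pred <- predict(fit, test)
    pred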
Random Forest Two-Sample Tests
An implementation of Random Forest-based two-sample tests as introduced in Hediger, Michel, and Naef (2022).
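The general classifier two-sample test idea can be sketched without the package: train a forest to distinguish the two samples and test whether its out-of-bag accuracy beats chance. This illustrates the principle only, not the package's own API:

    library(ranger)

    set.seed(1)
    x1 <- matrix(rnorm(200 * 5), ncol = 5)              # sample 1
    x2 <- matrix(rnorm(200 * 5, mean = 0.3), ncol = 5)  # sample 2, shifted
    dat <- data.frame(rbind(x1, x2),
                      label = factor(rep(c(0, 1), each = 200)))

    ## If the forest separates the samples better than chance, H0 is doubtful.
    fit <- ranger(label ~ ., data = dat, num.trees = 500)

    ## Out-of-bag predictions give an honest accuracy estimate.
    oob_correct <- sum(fit$predictions == dat$label, na.rm = TRUE)
    n_oob <- sum(!is.na(fit$predictions))
    binom.test(oob_correct, n_oob, p = 0.5, alternative = "greater")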
Multivariate Random Forest with Compositional Responses
Performs multivariate random forests with compositional responses and Euclidean predictors. The compositional data are first transformed using the additive log-ratio transformation or the alpha-transformation of Tsagris, Preston and Wood (2011).
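A minimal sketch of the additive log-ratio step mentioned above, using base R only; the multivariate forest fit itself is package-specific and omitted:

    ## Two compositions (rows sum to 1), three parts each.
    comp <- matrix(c(0.2, 0.3, 0.5,
                     0.1, 0.6, 0.3), ncol = 3, byrow = TRUE)

    ## alr with the last part as reference: log(x_j / x_D), j = 1, ..., D-1.
    alr <- log(comp[, -3] / comp[, 3])
    alr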
Find Outliers with Quantile Random Forests
Provides a method for detecting outliers in user-supplied data with quantile random forests, as introduced by Meinshausen (2006) <https://dl.acm.org/doi/10.5555/1248547.1248582>. It directly calls the ranger() function of the 'ranger' package for model fitting and prediction, and also implements evaluation of the outlier predictions. Compared with standard random forest outlier detection, this method offers higher accuracy and stability on large datasets.
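A hedged sketch of the underlying idea using ranger's quantile regression directly: flag observations falling outside the forest's conditional quantile band. The package's own wrapper and evaluation functions are not reproduced here:

    library(ranger)

    set.seed(2)
    n <- 500
    x <- runif(n)
    y <- sin(2 * pi * x) + rnorm(n, sd = 0.1)
    y[1:5] <- y[1:5] + 3                      # inject a few outliers
    dat <- data.frame(x = x, y = y)

    fit <- ranger(y ~ x, data = dat, quantreg = TRUE, num.trees = 500)

    ## Conditional 2.5% and 97.5% quantiles; points outside are flagged.
    q <- predict(fit, dat, type = "quantiles",
                 quantiles = c(0.025, 0.975))$predictions
    which(dat$y < q[, 1] | dat$y > q[, 2])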
Intervention in Prediction Measure for Random Forests
Computes the intervention in prediction measure (IPM) for assessing variable importance in random forests. See Epifanio (2017) for details.
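An illustrative sketch of an intervention-style importance: perturb one predictor at a time and measure how much the forest's predictions move. This mimics the spirit of the measure; it is not Epifanio's exact IPM computation:

    library(randomForest)

    set.seed(3)
    fit <- randomForest(mpg ~ ., data = mtcars, ntree = 500)
    base_pred <- predict(fit, mtcars)

    ## For each predictor, break its link to the response by permuting it,
    ## then measure the mean absolute shift in the forest's predictions.
    ipm <- sapply(names(mtcars)[-1], function(v) {
      perturbed <- mtcars
      perturbed[[v]] <- sample(perturbed[[v]])
      mean(abs(predict(fit, perturbed) - base_pred))
    })
    sort(ipm, decreasing = TRUE)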
Random Forest with Canonical Correlation Analysis
Random Forest with Canonical Correlation Analysis (RFCCA) is a
random forest method for estimating the canonical correlations between two
sets of variables, conditional on subject-related covariates. The trees are
built with a splitting rule specifically designed to partition the data to
maximize the canonical correlation heterogeneity between child nodes. The
method is described in Alakus et al. (2021)
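A toy illustration of the heterogeneity the splitting rule targets: the leading canonical correlation between two variable sets, computed separately in two covariate-defined subgroups with base R's cancor(). This shows the quantity being compared across child nodes, not the RFCCA package API:

    set.seed(4)
    n <- 200
    z <- rnorm(n)                      # subject-related covariate
    X <- matrix(rnorm(n * 3), ncol = 3)
    ## Y tracks X strongly when z > 0, weakly otherwise.
    Y <- X[, 1] * ifelse(z > 0, 1, 0.1) + matrix(rnorm(n * 2, sd = 0.5), ncol = 2)

    ## Leading canonical correlation in each covariate-defined subgroup.
    cc_left  <- cancor(X[z <= 0, ], Y[z <= 0, ])$cor[1]
    cc_right <- cancor(X[z > 0, ],  Y[z > 0, ])$cor[1]
    c(left = cc_left, right = cc_right)   # a good split exposes this gap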
Easy Spatial Modeling with Random Forest
Automatic generation and selection of spatial predictors for Random Forest models fitted to spatially structured data. Spatial predictors are constructed from a distance matrix among training samples using Moran's Eigenvector Maps (MEMs; Dray, Legendre, and Peres-Neto 2006).
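The generic MEM recipe can be sketched in base R: truncate the distance matrix, double-centre it, and keep eigenvectors with positive eigenvalues as spatial predictors. The truncation threshold below is an assumption, and the package's exact construction may differ:

    set.seed(5)
    coords <- matrix(runif(50 * 2), ncol = 2)
    D <- as.matrix(dist(coords))

    ## Truncate the distance matrix (threshold choice is an assumption here).
    thresh <- quantile(D[D > 0], 0.25)
    W <- ifelse(D <= thresh & D > 0, 1 - (D / (4 * thresh))^2, 0)

    ## Double-centre and eigendecompose; positive-eigenvalue vectors are MEMs.
    C <- diag(nrow(W)) - matrix(1 / nrow(W), nrow(W), nrow(W))
    eig <- eigen(C %*% W %*% C, symmetric = TRUE)
    mems <- eig$vectors[, eig$values > 1e-8]
    dim(mems)   # candidate spatial predictors for the forest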
Variable Selection using Random Forests
Variable selection from random forests using both backward variable elimination (for the selection of small sets of non-redundant variables) and selection based on the importance spectrum (somewhat similar to scree plots; for the selection of large sets of potentially highly correlated variables). Main applications are in high-dimensional data (e.g., microarray data and other genomics and proteomics applications).
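A rough sketch of importance-driven backward elimination with the 'randomForest' package: drop the least important fraction of variables each round and refit. The package's OOB-error tracking and stopping rule are omitted here:

    library(randomForest)

    set.seed(6)
    x <- data.frame(matrix(rnorm(100 * 20), ncol = 20))
    y <- x[[1]] + x[[2]] + rnorm(100, sd = 0.5)

    vars <- names(x)
    while (length(vars) > 2) {
      fit <- randomForest(x[vars], y, ntree = 300, importance = TRUE)
      imp <- importance(fit)[, "%IncMSE"]
      ## Keep the best 80% of variables each round.
      vars <- names(sort(imp, decreasing = TRUE))[seq_len(floor(0.8 * length(vars)))]
    }
    vars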
Unbiased Variable Importance for Random Forests
Computes a novel variable importance for random forests: impurity reduction importance scores for out-of-bag (OOB) data, complementing the existing inbag Gini importance.
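For context, ranger's 'impurity_corrected' importance is a related debiased alternative that can be set side by side with the plain inbag Gini importance; this is shown only as a nearby baseline, not as this package's method:

    library(ranger)

    fit_gini <- ranger(Species ~ ., data = iris, importance = "impurity")
    fit_corr <- ranger(Species ~ ., data = iris, importance = "impurity_corrected")

    ## Side-by-side comparison of inbag Gini vs. debiased importance.
    cbind(gini = importance(fit_gini), corrected = importance(fit_corr))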
Prediction Intervals with Random Forests and Boosted Forests
Implements various prediction interval methods with random forests and boosted forests.
The package has two main functions: pibf() produces prediction intervals with boosted forests
(PIBF) as described in Alakus et al. (2022).
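The pibf() interface itself is not sketched here; as a hedged stand-in, the following builds prediction intervals from a quantile regression forest via 'ranger', a classical baseline that such interval methods are typically compared against:

    library(ranger)

    set.seed(7)
    train <- mtcars[1:24, ]
    test  <- mtcars[25:32, ]

    fit <- ranger(mpg ~ ., data = train, quantreg = TRUE, num.trees = 1000)
    pi90 <- predict(fit, test, type = "quantiles",
                    quantiles = c(0.05, 0.95))$predictions

    ## Empirical coverage of the 90% intervals on the held-out rows.
    mean(test$mpg >= pi90[, 1] & test$mpg <= pi90[, 2])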