
Found 1871 packages in 0.08 seconds

tuneRanger — by Philipp Probst, 7 months ago

Tune Random Forest of the 'ranger' Package

Tunes random forests in one line. The package is mainly based on the 'ranger' and 'mlrMBO' packages.
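A minimal sketch of the one-line tuning call, assuming the mlr-task workflow that tuneRanger builds on; the task setup and defaults shown here are assumptions and may differ from the package's current interface.

```r
library(mlr)
library(tuneRanger)

# Build an mlr classification task from the iris data
iris.task <- makeClassifTask(data = iris, target = "Species")

# Tune the main 'ranger' hyperparameters (mtry, min.node.size,
# sample.fraction) via model-based optimization from 'mlrMBO'
res <- tuneRanger(iris.task, num.trees = 500)
res  # recommended hyperparameters and the tuned model
```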

hedgedrf — by Elliot Beck, a year ago

An Implementation of the Hedged Random Forest Algorithm

This algorithm is described in detail in the paper "Hedging Forecast Combinations With an Application to the Random Forest" by Beck et al. (2024) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5032102>. The package provides a function hedgedrf() that can be used to train a Hedged Random Forest model on a dataset, and a function predict.hedgedrf() that can be used to make predictions with the model.
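The description names hedgedrf() and predict.hedgedrf() but not their signatures, so the formula interface below is a hypothetical sketch, not the package's documented API.

```r
library(hedgedrf)

# Train a hedged random forest (formula interface is an assumption)
fit <- hedgedrf(Sepal.Length ~ ., data = iris)

# predict() dispatches to predict.hedgedrf() for objects of class 'hedgedrf'
pred <- predict(fit, iris)
```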

hypoRF — by Simon Hediger, 2 years ago

Random Forest Two-Sample Tests

An implementation of Random Forest-based two-sample tests as introduced in Hediger, Michel, and Naef (2022).

CompositionalRF — by Michail Tsagris, 25 days ago

Multivariate Random Forest with Compositional Responses

Performs multivariate random forests with compositional responses and Euclidean predictors. The compositional data are first transformed using the additive log-ratio transformation or the alpha-transformation of Tsagris, Preston and Wood (2011), and then the multivariate random forest of Rahman, Otridge and Pal (2017) is applied.

outqrf — by Tengfei Xu, 2 years ago

Find the Outlier by Quantile Random Forests

Provides a method to find outliers in data using quantile random forests, as introduced by Meinshausen (2006) <https://dl.acm.org/doi/10.5555/1248547.1248582>. It calls the ranger() function of the 'ranger' package directly to fit the data and make predictions, and also implements evaluation of the outlier-prediction results. Compared with plain random-forest outlier detection, this method offers higher accuracy and stability on large datasets.
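A hypothetical sketch of the detection step; outqrf() as the entry point and the result slot are assumptions inferred from the package name, not confirmed by its documentation.

```r
library(outqrf)

# Fit quantile random forests (via ranger()) to each column and flag
# observations whose observed value falls in an extreme quantile
out <- outqrf(iris)

# Inspect the flagged outliers (slot name is an assumption)
out$outliers
```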

IPMRF — by Irene Epifanio, 6 months ago

Intervention in Prediction Measure for Random Forests

Computes the intervention in prediction measure for assessing variable importance in random forests. See details in I. Epifanio (2017).

RFCCA — by Cansu Alakus, 2 years ago

Random Forest with Canonical Correlation Analysis

Random Forest with Canonical Correlation Analysis (RFCCA) is a random forest method for estimating the canonical correlations between two sets of variables, depending on subject-related covariates. The trees are built with a splitting rule specifically designed to partition the data so as to maximize the canonical-correlation heterogeneity between child nodes. The method is described in Alakus et al. (2021). 'RFCCA' uses the 'randomForestSRC' package (Ishwaran and Kogalur, 2020), frozen at version 2.9.3; its custom splitting rule feature is used to apply the proposed splitting rule. The 'randomForestSRC' package enables 'OpenMP' by default, contingent on support from the target architecture and operating system. In this package, the 'LAPACK' and 'BLAS' libraries are used for matrix decompositions.

spatialRF — by Blas M. Benito, 3 months ago

Easy Spatial Modeling with Random Forest

Automatic generation and selection of spatial predictors for Random Forest models fitted to spatially structured data. Spatial predictors are constructed from a distance matrix among training samples using Moran's Eigenvector Maps (MEMs; Dray, Legendre, and Peres-Neto 2006) or the RFsp approach (Hengl et al.). These predictors are used alongside user-supplied explanatory variables in Random Forest models. The package provides functions for model fitting, multicollinearity reduction, interaction identification, hyperparameter tuning, evaluation via spatial cross-validation, and result visualization using partial dependence and interaction plots. Model fitting relies on the 'ranger' package (Wright and Ziegler 2017).
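A minimal sketch of the fit-then-spatialize workflow described above, assuming the rf()/rf_spatial() interface and the example objects shipped with the package; the argument and dataset names are assumptions and may differ between versions.

```r
library(spatialRF)

# Non-spatial random forest fitted via 'ranger'
# (plant_richness_df and distance_matrix are assumed example objects)
m <- rf(
  data = plant_richness_df,
  dependent.variable.name = "richness_species_vascular",
  predictor.variable.names = colnames(plant_richness_df)[5:21],
  distance.matrix = distance_matrix
)

# Add MEM spatial predictors until residual spatial autocorrelation drops
m.spatial <- rf_spatial(model = m, method = "mem.moran.sequential")
```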

fru — by Miron Bartosz Kursa, 3 days ago

A Blazing Fast Implementation of Random Forest

Yet another implementation of the Random Forest method of Breiman (2001), written in Rust and tailored towards stability, correctness, efficiency, and scalability on modern multi-core machines. Handles both classification and regression, and provides permutation feature importance via a novel, highly optimised algorithm.

varSelRF — by Ramon Diaz-Uriarte, 2 months ago

Variable Selection using Random Forests

Variable selection from random forests using both backwards variable elimination (for the selection of small sets of non-redundant variables) and selection based on the importance spectrum (somewhat similar to scree plots; for the selection of large, potentially highly-correlated variables). Main applications in high-dimensional data (e.g., microarray data, and other genomics and proteomics applications).
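A minimal sketch of the backwards-elimination path on a small dataset; the parameter names follow the varSelRF() interface as I recall it, so treat them as assumptions to check against the package manual.

```r
library(varSelRF)

x <- iris[, 1:4]    # predictors
y <- iris$Species   # class labels (a factor)

# Iteratively drop the least important 20% of variables per step,
# keeping the smallest set whose OOB error stays within c.sd SEs
sel <- varSelRF(x, y, ntree = 500, vars.drop.frac = 0.2)
sel$selected.vars   # the retained, non-redundant variables
```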