Interaction Statistics
Fast, model-agnostic implementation of different H-statistics
introduced by Jerome H. Friedman and Bogdan E. Popescu (2008)
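Friedman and Popescu's pairwise H²-statistic compares the joint partial-dependence (PD) function of two features with the sum of their one-dimensional PD functions; it is near zero when the model is additive in that pair. A minimal NumPy sketch of the idea (illustrative only, not this package's API; the toy model and data are made up):

```python
import numpy as np

def pd_profile(f, X, cols, values):
    # Partial dependence of f on the features in `cols`, evaluated at each
    # row of `values` by averaging over the remaining features in X.
    out = np.empty(len(values))
    for i, v in enumerate(values):
        Xi = X.copy()
        Xi[:, cols] = v
        out[i] = f(Xi).mean()
    return out - out.mean()          # H-statistics use centered PD functions

def h2_pairwise(f, X, j, k):
    # Friedman-Popescu pairwise H^2: share of the joint PD's variability
    # not captured by the two one-dimensional PD functions.
    pd_j = pd_profile(f, X, [j], X[:, [j]])
    pd_k = pd_profile(f, X, [k], X[:, [k]])
    pd_jk = pd_profile(f, X, [j, k], X[:, [j, k]])
    return ((pd_jk - pd_j - pd_k) ** 2).sum() / (pd_jk ** 2).sum()

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
f_add = lambda X: X[:, 0] + X[:, 1]                            # additive model
f_int = lambda X: X[:, 0] + X[:, 1] + 3.0 * X[:, 0] * X[:, 1]  # with interaction
```

For the additive model H² is essentially zero, while the interaction term pushes it well away from zero; model-agnosticity comes from the fact that only calls to `f` are needed.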
Explaining and Visualizing Random Forests in Terms of Variable Importance
A set of tools to help explain which variables are most important in a random forest. Various variable importance measures are calculated and visualized in different settings to show how their importance changes depending on our criteria (Hemant Ishwaran, Udaya B. Kogalur, Eiran Z. Gorodeski, Andy J. Minn and Michael S. Lauer (2010))
Wrapper of Python Library 'shap'
Provides SHAP explanations of machine learning models. In applied machine learning, there is a strong belief that we need to strike a balance between interpretability and accuracy. However, in the field of interpretable machine learning, there are more and more new ideas for explaining black-box models. One of the best-known methods for local explanations is SHapley Additive exPlanations (SHAP), introduced by Lundberg, S., et al. (2016)
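SHAP assigns each feature the classical Shapley value of a cooperative game in which a coalition's payoff is the expected model prediction with the coalition's features fixed to the observation being explained. For a handful of features this can be computed exactly by enumerating coalitions; a minimal sketch (illustrative only, not the 'shap' library's implementation; model and data are made up):

```python
import itertools
import math
import numpy as np

def shap_values_exact(f, x, background):
    # Exact Shapley values for the prediction f(x); absent features are
    # marginalized out by averaging over a background dataset.
    p = len(x)
    phi = np.zeros(p)

    def value(S):
        # Coalition value: mean prediction with the features in S fixed to x.
        Z = background.copy()
        if S:
            Z[:, list(S)] = x[list(S)]
        return f(Z).mean()

    for j in range(p):
        others = [i for i in range(p) if i != j]
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                w = (math.factorial(len(S)) * math.factorial(p - len(S) - 1)
                     / math.factorial(p))
                phi[j] += w * (value(S + (j,)) - value(S))
    return phi

rng = np.random.default_rng(0)
background = rng.normal(size=(500, 3))
f = lambda Z: 2.0 * Z[:, 0] - Z[:, 1] + 0.5 * Z[:, 2]   # toy linear model
x = np.array([1.0, -2.0, 0.5])
phi = shap_values_exact(f, x, background)
# Efficiency property: attributions sum to f(x) minus the mean prediction.
```

The enumeration costs O(2^p) coalition evaluations, which is why practical implementations rely on sampling or model-specific shortcuts such as TreeSHAP.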
Derivatives of the First-Passage Time Density and Cumulative Distribution Function, and Random Sampling from the (Truncated) First-Passage Time Distribution
First, we provide functions to calculate the partial derivative of the first-passage time diffusion probability density function (PDF) and cumulative
distribution function (CDF) with respect to the first-passage time t (only for PDF), the upper barrier a, the drift rate v, the relative starting point w, the
non-decision time t0, the inter-trial variability of the drift rate sv, the inter-trial variability of the relative starting point sw, and the inter-trial variability
of the non-decision time st0. In addition, the PDF and CDF themselves are provided. Most calculations are done on the logarithmic scale for numerical stability.
Since the PDF, CDF, and their derivatives are represented as infinite series, we give the user the option to control the approximation errors with the argument
'precision'. For the numerical integration we use the C library cubature by Johnson, S. G. (2005-2013) <https://github.com/stevengj/cubature>. Numerical integration is
required whenever sv, sw, and/or st0 is not zero. Note that numerical integration reduces the speed of the computation, and the precision can no longer be
guaranteed. Therefore, whenever numerical integration is used, an estimate of the approximation error is provided in the output list.
Note: The large number of contributors (ctb) is due to the inclusion of many C/C++ code chunks from the GNU Scientific Library (GSL).
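For the special case sv = sw = st0 = 0, the first-passage time density at the lower barrier has a well-known large-time series representation (e.g. Navarro & Fuss, 2009). A minimal sketch of that series with a fixed truncation K (illustrative only, not this package's adaptive error control; parameter names follow the description above):

```python
import numpy as np

def wfpt_density(t, a, v, w, K=200):
    # Large-time series for the Wiener first-passage-time density at the
    # lower barrier: drift v, barrier separation a, relative start w.
    t = np.atleast_1d(np.asarray(t, dtype=float))
    k = np.arange(1, K + 1)[:, None]
    series = (k * np.sin(k * np.pi * w)
                * np.exp(-(k ** 2) * np.pi ** 2 * t / (2.0 * a ** 2))).sum(axis=0)
    return (np.pi / a ** 2) * np.exp(-v * a * w - (v ** 2) * t / 2.0) * series

# With v = 0 and w = 0.5 the process is symmetric, so the density should
# integrate to 0.5 (half the probability mass exits through the lower barrier).
t = np.linspace(1e-3, 30.0, 20001)
f = wfpt_density(t, a=1.0, v=0.0, w=0.5)
mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
```

The series converges quickly for large t but slowly for small t, which is why production implementations (like this package) switch between small-time and large-time representations and bound the truncation error explicitly.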
Second, we provide methods to sample from the first-passage time distribution, with or without user-defined truncation from above. The first method is a new adaptive
rejection sampler building on the work of Gilks and Wild (1992;
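Rejection sampling in general proposes points uniformly under an envelope and keeps those falling below the target density; the adaptive variant refines the envelope as samples accumulate. A minimal non-adaptive sketch with a constant envelope on a truncated support (illustrative only, not the sampler described above; the truncated-normal target is made up):

```python
import numpy as np

def rejection_sample(pdf, lo, hi, pdf_max, n, rng):
    # Plain rejection sampling from an (unnormalized) density on [lo, hi]:
    # propose uniformly on [lo, hi] x [0, pdf_max], keep points under the curve.
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi, size=4 * n)
        u = rng.uniform(0.0, pdf_max, size=4 * n)
        out.extend(x[u < pdf(x)])
    return np.array(out[:n])

# Example: a standard normal density truncated to [0, 2] (truncation from above at 2).
rng = np.random.default_rng(1)
target = lambda x: np.exp(-x ** 2 / 2.0)      # unnormalized N(0, 1) kernel
samples = rejection_sample(target, 0.0, 2.0, 1.0, 10_000, rng)
```

The constant envelope wastes proposals where the density is low; an adaptive rejection sampler instead tightens a piecewise envelope around the log-density, raising the acceptance rate as sampling proceeds.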
Surrogate-Assisted Feature Extraction
Provides a model-agnostic tool for fitting a white-box model to features extracted from a black-box model. For more information see Gosiewska et al. (2020)
Interactive Studio for Explanatory Model Analysis
Automate the explanatory analysis of machine learning predictive
models. Generate advanced interactive model explanations in the form of
a serverless HTML site with only one line of code. This tool is
model-agnostic and therefore compatible with most black-box predictive
models and frameworks. The main function computes various (instance and
model-level) explanations and produces a customisable dashboard, which
consists of multiple panels for plots with their short descriptions. It is
possible to easily save the dashboard and share it with others. modelStudio
facilitates the process of Interactive Explanatory Model Analysis introduced
in Baniecki et al. (2023)
Generate Tidy Charts Inspired by 'IBCS'
There is a wide range of R packages created for data visualization, but until now there was no simple and easily accessible way to create clean and transparent charts. The 'tidycharts' package enables the user to generate charts compliant with the International Business Communication Standards ('IBCS'). This means unified bar widths, colors, chart sizes, etc. Creating homogeneous reports has never been easier! Additionally, users can apply semantic notation to indicate different data scenarios (plan, budget, forecast). What's more, it is possible to customize the charts by creating a personal color palette, with the possibility of switching back to the default options after experimenting. We wanted the package to be helpful in writing reports, so we also made it possible to join charts into one clear image. All charts are generated in SVG format and can be shown in the 'RStudio' viewer pane or exported to HTML output of 'knitr'/'markdown'.
Compute SHAP Values for Your Tree-Based Models Using the 'TreeSHAP' Algorithm
An efficient implementation of the 'TreeSHAP' algorithm
introduced by Lundberg et al. (2020)
Reading, Quality Control and Preprocessing of MBA (Multiplex Bead Assay) Data
Speeds up the process of loading raw data from MBA (Multiplex Bead Assay) examinations, performs quality control checks, and automatically normalises the data, preparing it for more advanced downstream tasks. The main objective of the package is to create a simple environment for users who do not necessarily have experience with the R language. The package is developed within the project of the same name, 'PvSTATEM', an international project aiming for malaria elimination.
Explainers for Regression Models in HIV Research
A dedicated viral-explainer model tool designed to empower researchers in the field of HIV research, particularly in viral load and CD4 (Cluster of Differentiation 4) lymphocyte regression modeling. Drawing inspiration from the 'tidymodels' framework for rigorous model building by Max Kuhn and Hadley Wickham (2020) <https://www.tidymodels.org>, and the 'DALEXtra' tool for explainability by Przemyslaw Biecek (2020)