Fitting Shared Atoms Nested Models via MCMC or Variational Bayes
An efficient tool for fitting nested mixture models based on a shared set of
atoms via Markov Chain Monte Carlo and variational inference algorithms.
Specifically, the package implements the common atoms model (Denti et al., 2023),
its finite version (similar to D'Angelo et al., 2023), and a hybrid finite-infinite
model (D'Angelo and Denti, 2024). All models implement univariate nested mixtures
with Gaussian kernels equipped with a normal-inverse gamma prior distribution
on the parameters. Additional functions are provided to help analyze the
results of the fitting procedure.
References:
Denti, Camerlenghi, Guindani, Mira (2023)
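As a minimal illustration of the nested data structure these models target (a sketch in base R with toy atoms and group weights; this is not the package's own interface, which is not shown above):

    # Two groups draw from the same set of Gaussian atoms but mix them with
    # group-specific weights, the structure targeted by shared-atoms nested mixtures.
    set.seed(1)
    atoms <- data.frame(mean = c(-3, 0, 4), sd = c(0.5, 1, 0.8))  # shared atoms
    w <- rbind(g1 = c(0.6, 0.3, 0.1),   # group-level mixing weights over the atoms
               g2 = c(0.1, 0.2, 0.7))
    simulate_group <- function(n, weights) {
      k <- sample(nrow(atoms), n, replace = TRUE, prob = weights)
      rnorm(n, mean = atoms$mean[k], sd = atoms$sd[k])
    }
    y1 <- simulate_group(200, w["g1", ])
    y2 <- simulate_group(200, w["g2", ])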
Spatial Clustering with Hidden Markov Random Field using Empirical Bayes
Spatial clustering with a hidden Markov random field fitted via the EM algorithm; details of the method can be found in Yi Yang (2021)
Calculates Safety Stopping Boundaries for a Single-Arm Trial using Bayes
Computation of stopping boundaries for a single-arm trial using a
Bayesian criterion; i.e., for each m <= n (n = total number of patients in the
trial), the smallest number of observed toxicities that leads to termination
of the trial/accrual according to the specified criteria is calculated. The
probabilities of stopping the trial/accrual at and up until (resp.) the m-th
patient (m <= n) are also calculated. This design is more conservative than
the frequentist approach (using Clopper-Pearson CIs), which may be preferable
when safety is the primary concern. See also Aamot et al. (2010)
"Continuous monitoring of toxicity in clinical trials - simulating the risk
of stopping prematurely"
Multiple Testing Approach using Average Power Function (APF) and Bayes FDR Robust Estimation
Implements a multiple testing approach to the
choice of a threshold gamma on the p-values using the
Average Power Function (APF) and Bayes False Discovery
Rate (FDR) robust estimation. Function apf_fdr()
estimates both quantities from either raw data or
p-values. Function apf_plot() produces smooth graphs
and tables of the relevant results. Details of the methods
can be found in Quatto P, Margaritella N, et al. (2019)
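For orientation, the quantity estimated at a threshold gamma can be sketched with a generic empirical estimate of the Bayes FDR (this is only the textbook Storey-type estimate, not the package's robust estimator, and the arguments of apf_fdr() are not reproduced here):

    # Generic empirical estimate of the Bayes FDR at a p-value threshold gamma.
    fdr_at_gamma <- function(p, gamma, lambda = 0.5) {
      pi0 <- min(1, mean(p > lambda) / (1 - lambda))     # estimated null proportion
      pi0 * gamma / max(mean(p <= gamma), 1 / length(p))
    }
    set.seed(2)
    p <- c(runif(900), rbeta(100, 1, 20))   # mostly null p-values plus some signal
    sapply(c(0.01, 0.05, 0.10), fdr_at_gamma, p = p)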
Algorithm for Searching the Space of Gaussian Directed Acyclic Graph Models Through Moment Fractional Bayes Factors
We propose an objective Bayesian algorithm for searching the space of Gaussian directed acyclic graph (DAG) models. The algorithm uses moment fractional Bayes factors (MFBF) and is suitable for learning sparse graphs. The algorithm is implemented using Armadillo, an open-source C++ linear algebra library.
Differential Exon Usage Test for RNA-Seq Data via Empirical Bayes Shrinkage of the Dispersion Parameter
Differential exon usage test for RNA-Seq data via an empirical Bayes shrinkage method for the dispersion parameter that utilizes inclusion-exclusion data to analyze the propensity to skip an exon across groups. The input data consists of two matrices where each row represents an exon and the columns represent the biological samples. The first matrix is the count of the number of reads expressing the exon for each sample. The second matrix is the count of the number of reads that either express the exon or explicitly skip the exon across the samples, a.k.a. the total count matrix. Dividing the two matrices yields proportions representing the propensity to express versus skip the exon for each sample.
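The expected input layout can be sketched with toy matrices (the pseudo-count moderation at the end is only a generic stand-in for shrinkage, not the package's empirical Bayes estimator):

    # Rows = exons, columns = samples: inclusion counts and total counts.
    incl <- matrix(c(10, 12,  3,  2,
                     30, 28, 31, 29), nrow = 2, byrow = TRUE,
                   dimnames = list(c("exon1", "exon2"), paste0("s", 1:4)))
    total <- matrix(c(40, 38, 41, 39,
                      35, 33, 36, 34), nrow = 2, byrow = TRUE,
                    dimnames = dimnames(incl))
    psi_raw <- incl / total                  # propensity to include each exon
    prior_n <- 5                             # assumed pseudo-count
    psi_mod <- (incl + prior_n * rowMeans(psi_raw)) / (total + prior_n)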
Spatial Dependence: Weighting Schemes, Statistics
A collection of functions to create spatial weights matrix
objects from polygon 'contiguities', from point patterns by distance and
tessellations, for summarizing these objects, and for permitting their
use in spatial data analysis, including regional aggregation by minimum
spanning tree; a collection of tests for spatial 'autocorrelation',
including global 'Moran's I' and 'Geary's C' proposed by 'Cliff' and 'Ord'
(1973, ISBN: 0850860369) and (1981, ISBN: 0850860814), 'Hubert/Mantel'
general cross product statistic, Empirical Bayes estimates and
'Assunção/Reis' (1999)
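A short sketch of the usual workflow on simulated point data (neighbours from k nearest points, row-standardised weights, then global autocorrelation tests):

    library(spdep)
    set.seed(3)
    coords <- cbind(runif(100), runif(100))
    x <- rnorm(100)
    nb <- knn2nb(knearneigh(coords, k = 4))  # neighbour object from a point pattern
    lw <- nb2listw(nb)                       # row-standardised spatial weights
    moran.test(x, lw)                        # global Moran's I
    geary.test(x, lw)                        # global Geary's C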
Extreme Value Analysis
General functions for performing extreme value analysis. In particular, allows for inclusion of covariates into the parameters of the extreme-value distributions, as well as estimation through MLE, L-moments, generalized (penalized) MLE (GMLE), and Bayesian methods. Inference methods include parametric normal approximation, profile-likelihood, Bayes, and bootstrapping. Some bivariate functionality and dependence checking (e.g., auto-tail dependence function plot, extremal index estimation) is also included. For a tutorial, see Gilleland and Katz (2016)
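A minimal sketch of fitting a GEV distribution to simulated block maxima with fevd() (default MLE; covariates and the other estimation methods are options not shown here):

    library(extRemes)
    set.seed(4)
    bm <- apply(matrix(rnorm(365 * 50), nrow = 365), 2, max)  # 50 "annual" maxima
    fit <- fevd(bm, type = "GEV")
    summary(fit)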
Classification, Regression and Feature Evaluation
A suite of machine learning algorithms written in C++ with an R interface; it contains several learning techniques for classification and regression. Predictive models include, e.g., classification and regression trees with optional constructive induction and models in the leaves, random forests, kNN, naive Bayes, and locally weighted regression. All predictions obtained with these models can be explained and visualized with the 'ExplainPrediction' package. This package is especially strong in feature evaluation, where it contains several variants of the Relief algorithm and many impurity-based attribute evaluation functions, e.g., Gini, information gain, MDL, and DKM. These methods can be used for feature selection or discretization of numeric attributes. The OrdEval algorithm and its visualization are used for evaluation of data sets with ordinal features and class, enabling analysis according to the Kano model of customer satisfaction. Several algorithms support parallel multithreaded execution via OpenMP. The top-level documentation is reachable through ?CORElearn.
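A brief sketch of the two main entry points, fitting a predictive model and scoring features with a Relief-type estimator (argument values here are illustrative):

    library(CORElearn)
    model <- CoreModel(Species ~ ., iris, model = "rf")        # random forest
    pred  <- predict(model, iris, type = "class")              # class predictions
    attrEval(Species ~ ., iris, estimator = "ReliefFequalK")   # feature evaluation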
Scaling Models and Classifiers for Textual Data
Scaling models and classifiers for sparse matrix objects representing
textual data in the form of a document-feature matrix. Includes original
implementations of 'Laver', 'Benoit', and Garry's (2003)