Hierarchical Modeling and Frequency Method Checking on Overdispersed Gaussian, Poisson, and Binomial Data
We utilize approximate Bayesian machinery to fit two-level conjugate hierarchical models to overdispersed Gaussian, Poisson, and Binomial data, and we evaluate whether the resulting approximate Bayesian interval estimates for random effects attain their nominal confidence levels via frequency coverage evaluation. The data that Rgbp assumes comprise an observed sufficient statistic for each random effect, such as a group average or proportion, without population-level data. The approximate Bayesian tool, equipped with the adjustment for density maximization, produces approximate point and interval estimates for the model parameters, including the second-level variance component, the regression coefficients, and the random effects. For Binomial data, the package provides an option to draw posterior samples of all the model parameters via the acceptance-rejection method. The package also provides a quick way to evaluate coverage rates of the resulting Bayesian interval estimates for random effects via parametric bootstrapping, which we call frequency method checking.
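For illustration, the core idea of frequency method checking (parametric-bootstrap evaluation of interval coverage for random effects) can be sketched in base R for the Gaussian case. This is a conceptual sketch only, not Rgbp's interface: the hyperparameters mu and A are treated as known here, whereas the package estimates them with the adjustment for density maximization.

    # Parametric-bootstrap coverage check for 95% intervals on random effects
    # (conceptual sketch, not Rgbp's API; mu and A assumed known for simplicity).
    set.seed(1)
    k  <- 20                      # number of groups
    V  <- rep(1, k)               # known sampling variances
    mu <- 0; A <- 2               # hyperparameters used to generate bootstrap data
    n_boot  <- 200
    covered <- matrix(NA, n_boot, k)
    for (b in seq_len(n_boot)) {
      theta <- rnorm(k, mu, sqrt(A))        # true random effects
      y     <- rnorm(k, theta, sqrt(V))     # observed group means
      B     <- V / (V + A)                  # shrinkage factors
      post_mean <- (1 - B) * y + B * mu
      post_sd   <- sqrt((1 - B) * V)
      covered[b, ] <- theta >= post_mean - 1.96 * post_sd &
                      theta <= post_mean + 1.96 * post_sd
    }
    mean(covered)                 # estimated frequency coverage of the intervals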
Simulate, Evaluate, and Analyze Dose Finding Trials with Bayesian MCPMod
Bayesian MCPMod (Fleischer et al. (2022)).
A Shiny Application for End-to-End Bayesian Decision Network Analysis and Web-Deployment
A Shiny application for learning Bayesian Decision Networks from data. This package can be used for probabilistic reasoning (in the observational setting), causal inference (in the presence of interventions), and learning policy decisions (in the Decision Network setting). Functionalities include end-to-end implementations for data preprocessing, structure learning, exact inference, approximate inference, extending the learned structure to Decision Networks, and policy optimization using statistically rigorous methods such as bootstrapping, resampling, ensemble averaging, and cross-validation. In addition to Bayesian Decision Networks, it also features correlation networks, community detection, graph visualizations, graph exports, and web deployment of the learned models as Shiny dashboards.
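The same structure-learning and approximate-inference workflow can be illustrated with the 'bnlearn' package (a different package from the Shiny application described here), using the learning.test data that bnlearn ships with:

    # Illustrative bnlearn workflow, not the application above.
    library(bnlearn)
    data(learning.test)                   # discrete example data bundled with bnlearn
    dag    <- hc(learning.test)           # hill-climbing structure learning
    fitted <- bn.fit(dag, learning.test)  # parameter learning
    # approximate (Monte Carlo) inference: P(B == "b" | A == "a")
    cpquery(fitted, event = (B == "b"), evidence = (A == "a"))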
Neural AutoRegressive Fractionally Integrated Moving Average Model
Methods and tools for forecasting univariate time series using the NARFIMA (Neural AutoRegressive Fractionally Integrated Moving Average) model. It combines neural networks with fractional differencing to capture both nonlinear patterns and long-term dependencies. The NARFIMA model supports seasonal adjustment, Box-Cox transformations, optional exogenous variables, and the computation of prediction intervals. In addition to the NARFIMA model, this package provides alternative forecasting models including NARIMA (Neural ARIMA), NBSTS (Neural Bayesian Structural Time Series), and NNaive (Neural Naive) for performance comparison across different modeling approaches. The methods are based on algorithms introduced by Chakraborty et al. (2025).
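As a rough sketch of the general hybrid idea (a long-memory component plus a neural network fitted to its residuals), one can combine arfima() and nnetar() from the 'forecast' package. This only illustrates the decomposition such models build on; it is not the NARFIMA implementation itself.

    # Long-memory fit plus neural network on residuals (conceptual sketch only).
    library(forecast)
    y <- AirPassengers
    fit_lm <- arfima(y)                          # fractionally differenced ARMA (long memory)
    fit_nn <- nnetar(na.omit(residuals(fit_lm))) # feed-forward net on the residuals
    h  <- 12
    fc <- as.numeric(forecast(fit_lm, h = h)$mean) +
          as.numeric(forecast(fit_nn, h = h)$mean)
    fc                                            # combined point forecasts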
Mixed GAM Computation Vehicle with Automatic Smoothness Estimation
Generalized additive (mixed) models, some of their extensions, and other generalized ridge regression with multiple smoothing parameter estimation by (Restricted) Marginal Likelihood, Generalized Cross Validation and similar, or using iterated nested Laplace approximation for fully Bayesian inference. See Wood (2017).
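A minimal mgcv example, fitting a Gaussian additive model with smoothness selected by REML on data simulated by gamSim():

    # Additive model with four smooth terms; smoothing parameters chosen by REML.
    library(mgcv)
    set.seed(2)
    dat <- gamSim(1, n = 200, verbose = FALSE)   # simulated test data shipped with mgcv
    fit <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat, method = "REML")
    summary(fit)
    plot(fit, pages = 1)                         # estimated smooth functions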
Variance Estimation of FMT Method (Fully Moderated T-Statistic)
The FMT method computes posterior residual variances to be used in the denominator of a moderated t-statistic from a linear model analysis of gene expression data. It is an extension of the moderated t-statistic originally proposed by Smyth (2004).
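For context, the ordinary moderated t-statistic of Smyth (2004) is available through 'limma'; the sketch below shows only that baseline analysis on simulated data and does not reproduce the FMT extension itself.

    # Ordinary moderated t-statistics via limma (baseline that FMT extends).
    library(limma)
    set.seed(3)
    expr   <- matrix(rnorm(1000 * 6), nrow = 1000)   # 1000 genes, 6 samples
    group  <- factor(c("A", "A", "A", "B", "B", "B"))
    design <- model.matrix(~ group)
    fit <- eBayes(lmFit(expr, design))               # shrinks residual variances
    topTable(fit, coef = 2, number = 5)              # moderated t-tests for the group effect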
Hierarchical Bayesian Small Area Estimation
Functions to compute small area estimates based on a basic area or unit-level model. The model is fit using restricted maximum likelihood, or in a hierarchical Bayesian way. In the latter case numerical integration is used to average over the posterior density for the between-area variance. The output includes the model fit, small area estimates and corresponding mean squared errors, as well as some model selection measures. Additional functions provide means to compute aggregate estimates and mean squared errors, to minimally adjust the small area estimates to benchmarks at a higher aggregation level, and to graphically compare different sets of small area estimates.
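The basic area-level model (a direct estimate equal to the area mean plus sampling error with known variance) can be sketched in base R as below. For brevity the between-area variance is fit by plain maximum likelihood, whereas the package uses REML or a hierarchical Bayesian fit; none of this is the package's own interface.

    # Area-level model sketch: y_i = theta_i + e_i, theta_i = beta0 + beta1*x_i + v_i.
    set.seed(4)
    m     <- 30
    x     <- runif(m)                          # area-level covariate
    psi   <- runif(m, 0.5, 1.5)                # known sampling variances of direct estimates
    theta <- 1 + 2 * x + rnorm(m, sd = sqrt(0.5))
    y     <- rnorm(m, theta, sqrt(psi))        # direct estimates
    nll <- function(s2) {                      # profile negative log-likelihood in sigma^2_v
      V    <- psi + s2
      beta <- coef(lm(y ~ x, weights = 1 / V))
      r    <- y - cbind(1, x) %*% beta
      0.5 * sum(log(V) + r^2 / V)
    }
    s2_hat   <- optimize(nll, c(1e-6, 10))$minimum
    V        <- psi + s2_hat
    beta_hat <- coef(lm(y ~ x, weights = 1 / V))
    gamma    <- s2_hat / V                     # shrinkage weights
    sae <- gamma * y + (1 - gamma) * as.vector(cbind(1, x) %*% beta_hat)
    head(cbind(direct = y, model = sae))       # direct vs. model-based small area estimates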
Sensitivity Assessment to Unmeasured Confounding with Multiple Treatments
A sensitivity analysis approach for unmeasured confounding in observational data with multiple treatments and a binary outcome. This approach derives the general bias formula and provides adjusted causal effect estimates in response to various assumptions about the degree of unmeasured confounding. Nested multiple imputation is embedded within the Bayesian framework to integrate uncertainty about the sensitivity parameters and sampling variability. Bayesian Additive Regression Trees (BART) are used for outcome modeling. The causal estimands are conditional average treatment effects (CATE) based on the risk difference. For more details, see Hu et al. (2020), "A flexible sensitivity analysis approach for unmeasured confounding with multiple treatments and a binary outcome with application to SEER-Medicare lung cancer data".
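Purely as an illustration of the outcome-modeling step, a BART model for a binary outcome under multiple treatments can be fit with the 'BART' package (pbart()). The bias formula, sensitivity parameters, and nested multiple imputation described above are not reproduced in this sketch.

    # BART outcome model for a binary outcome with three treatment groups (illustration only).
    library(BART)
    set.seed(5)
    n   <- 500
    x   <- matrix(rnorm(n * 3), n, 3, dimnames = list(NULL, c("x1", "x2", "x3")))
    trt <- sample(1:3, n, replace = TRUE)
    p   <- plogis(0.5 * x[, 1] + 0.3 * (trt == 2) - 0.4 * (trt == 3))
    y   <- rbinom(n, 1, p)
    fit <- pbart(x.train = cbind(x, trt = trt), y.train = y)   # probit BART
    # predicted risks had everyone received treatment 1 vs. treatment 2
    p1 <- predict(fit, cbind(x, trt = 1))$prob.test.mean
    p2 <- predict(fit, cbind(x, trt = 2))$prob.test.mean
    mean(p2 - p1)                                              # crude risk-difference contrast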
Linear Mixed-Effects Models using 'Eigen' and S4
Fit linear and generalized linear mixed-effects models. The models and their components are represented using S4 classes and methods. The core computational algorithms are implemented using the 'Eigen' C++ library for numerical linear algebra and 'RcppEigen' "glue".
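Minimal examples with the two workhorse functions, lmer() and glmer(), on data sets shipped with lme4:

    # Linear mixed model with correlated random intercept and slope per subject.
    library(lme4)
    fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
    summary(fit)
    # Generalized linear mixed model (binomial) with a random intercept per herd.
    gfit <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
                  data = cbpp, family = binomial)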
Estimating Speakers of Texts
Estimates the authors or speakers of texts, using methods developed in Huang, Perry, and Spirling (2020).