
Found 108 packages

tempted — by Pixu Shi, 2 years ago

Temporal Tensor Decomposition, a Dimensionality Reduction Tool for Longitudinal Multivariate Data

TEMPoral TEnsor Decomposition (TEMPTED) is a dimension reduction method for multivariate longitudinal data with varying temporal sampling. It formats the data into a temporal tensor and decomposes it into a sum of low-dimensional components, each consisting of a subject loading vector, a feature loading vector, and a continuous temporal loading function. These loadings provide a low-dimensional representation of subjects or samples and can be used to identify features associated with clusters of subjects or samples. TEMPTED allows subjects to have different temporal sampling, so time points do not need to be binned and missing time points do not need to be imputed.
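
Schematically (the notation here is assumed for illustration, not taken from the package), a decomposition with R components approximates the trajectory of feature j for subject i as

  x_{ij}(t) \approx \sum_{r=1}^{R} \lambda_r \, a_{ri} \, b_{rj} \, \psi_r(t),

where a_r is the subject loading vector, b_r the feature loading vector, and \psi_r(t) the continuous temporal loading function of component r, so each subject may be observed at its own arbitrary time points t.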

forceplate — by Raphael Hartmann, 8 hours ago

Processing Force-Plate Data

Process raw force-plate data (txt files) by segmenting it into trials and, if needed, calculating (user-defined) descriptive statistics of variables for user-defined time bins (relative to trigger onsets) for each trial. During segmentation, a baseline correction, a filter, and data imputation can be applied if needed. Experimental data can also be processed and combined with the segmented force-plate data. This procedure is suggested by Johannsen et al. (2023), and some of the options (e.g., the choice of low-pass filter) are also suggested by Winter (2009).
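
As a generic illustration of the time-binning step (not the forceplate API; segment_bins() and all argument names are invented here), per-bin descriptive statistics relative to trigger onset could look like this:

# Hypothetical helper: bin a force trace relative to trigger onset and
# compute per-bin descriptive statistics (mean and SD).
segment_bins <- function(time, force, onset, bin_width = 0.1, n_bins = 5) {
  rel  <- time - onset                                  # time relative to onset (s)
  bins <- seq(0, by = bin_width, length.out = n_bins + 1)
  idx  <- findInterval(rel, bins)
  keep <- idx >= 1 & idx <= n_bins                      # drop samples outside the bins
  tapply(force[keep], idx[keep], function(x) c(mean = mean(x), sd = sd(x)))
}

set.seed(1)
t <- seq(0, 1, by = 0.001)                              # 1 s of samples at 1000 Hz
f <- 500 + cumsum(rnorm(length(t)))                     # synthetic force signal (N)
segment_bins(t, f, onset = 0.2)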

SpatialKWD — by Stefano Gualandi, 3 years ago

Spatial KWD for Large Spatial Maps

Contains efficient implementations of discrete optimal transport algorithms for computing Kantorovich-Wasserstein distances between pairs of large spatial maps (Bassetti, Gualandi & Veneroni, 2020). All the algorithms are based on an ad hoc implementation of the network simplex algorithm. The package has four main helper functions: compareOneToOne() (to compare two spatial maps), compareOneToMany() (to compare a reference map with a list of other maps), compareAll() (to compute a matrix of distances between a list of maps), and focusArea() (to compute the KWD distance within a focus area). For non-convex maps, the helper functions first build the convex hull of the input bins and pad the weights with zeros.
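
A minimal sketch of the one-to-one comparison; the positional arguments (bin coordinates, then a two-column weight matrix) and the returned distance field are assumptions based on the description above, so check ?compareOneToOne:

library(SpatialKWD)

# Bin centers of a 32 x 32 grid and two random spatial histograms to compare.
coords  <- as.matrix(expand.grid(x = 1:32, y = 1:32))
weights <- cbind(runif(nrow(coords)), runif(nrow(coords)))

res <- compareOneToOne(coords, weights)  # assumed call: two maps, one KWD distance
res$distance                             # assumed name of the returned distance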

degross — by Philippe Lambert, 4 years ago

Density Estimation from GROuped Summary Statistics

Estimation of a density from grouped (tabulated) summary statistics evaluated in each of the big bins (or classes) partitioning the support of the variable. These statistics include class frequencies and central moments of order one up to four. The log-density is modelled using a linear combination of penalised B-splines. The multinomial log-likelihood of the frequencies is combined with the log of a root-n approximation of the sampling density of the observed vector of central moments in each class, plus a roughness penalty based on the differences in the coefficients of neighbouring B-splines. The resulting penalized log-likelihood is maximized using the EM algorithm to obtain an estimate of the spline parameters and, consequently, of the variable density and related quantities such as quantiles; see Lambert, P. (2021) for details.
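
Schematically (the notation is assumed here, not taken from the paper), the criterion maximized by the EM algorithm has the form

  \ell_{\text{pen}}(\phi) = \ell_{\text{freq}}(\phi) + \ell_{\text{mom}}(\phi) - \frac{\lambda}{2}\, \phi^\top D^\top D\, \phi,

where \phi are the B-spline coefficients of the log-density, \ell_freq is the multinomial log-likelihood of the class frequencies, \ell_mom the log of the root-n approximation to the sampling density of the class central moments, and D a difference matrix acting on neighbouring coefficients.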

ggsurveillance — by Alexander Bartel, 18 days ago

Tools for Outbreak Investigation/Infectious Disease Surveillance

Create epicurves, epigantt charts, and diverging bar charts using 'ggplot2'. Prepare data for visualisation or other reporting for infectious disease surveillance and outbreak investigation (time series data). Includes tidy functions for date-based transformations common in reporting, such as (A) seasonal date alignment for respiratory disease surveillance, (B) date-based case binning into specified time intervals such as isoweek, epiweek, month, and more, (C) automated detection and marking of the new year on the date/datetime axis of a 'ggplot2' plot, and (D) labelling the last value of a time series. An introduction to epicurves can be found on the US CDC website (2012, <https://www.cdc.gov/training/quicklearns/epimode/index.html>).
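
A minimal epicurve sketch; geom_epicurve() and its date_resolution argument are assumed from the package description above, and the line-list data are invented:

library(ggplot2)
library(ggsurveillance)

# Invented line list: one onset date per case.
cases <- data.frame(onset = as.Date("2024-01-01") + sample(0:120, 200, replace = TRUE))

ggplot(cases, aes(x = onset)) +
  geom_epicurve(date_resolution = "isoweek")  # assumed: bins cases by ISO week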

aLBI — by Ataher Ali, 5 months ago

Estimating Length-Based Indicators for Fish Stock

Provides tools for estimating length-based indicators from length frequency data to assess fish stock status and manage fisheries sustainably. Implements methods from Cope and Punt (2009) for data-limited stock assessment and Froese (2004) for detecting overfishing using simple indicators. Key functions include: FrequencyTable(), which calculates a frequency table from the collected lengths and extracts the length frequency data from the table using the upper limit of each length range; the bin width for class intervals can be supplied as a numeric value or, if not provided, is calculated automatically using Sturges' (1926) formula. CalPar() calculates various lengths used in fish stock assessment as biological length indicators, such as asymptotic length (Linf), maximum length (Lmax), length at sexual maturity (Lm), and optimal length (Lopt). FishPar() calculates the length-based indicators (LBIs) proposed by Froese (2004), such as the percentage of mature fish (Pmat), the percentage of fish at optimal length (Popt), the percentage of mega-spawners (Pmega), and their sum, Pobj; it also estimates confidence intervals for the different lengths, visualizes length frequency distributions, and returns data frames of the calculated values. FishSS() uses the decision rules of Cope and Punt (2009) together with the parameters calculated by FishPar() (e.g., Pobj, Pmat, Popt, LM_ratio) to determine stock status relative to target spawning biomass (TSB40) and limit spawning biomass (LSB25). LWR() fits and visualizes length-weight relationships using linear regression, with options for log-transformation and customizable plotting.
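
As a generic illustration of the Froese (2004) indicators named above (not the package API; froese_lbi() and the 10% band around Lopt are conventions assumed here):

# Pmat:  % of fish at or above length at maturity (Lm)
# Popt:  % of fish within 10% of the optimal length (Lopt)
# Pmega: % of mega-spawners, i.e. fish above 1.1 * Lopt
froese_lbi <- function(L, Lm, Lopt) {
  c(Pmat  = 100 * mean(L >= Lm),
    Popt  = 100 * mean(L >= 0.9 * Lopt & L <= 1.1 * Lopt),
    Pmega = 100 * mean(L > 1.1 * Lopt))
}

set.seed(42)
L <- rlnorm(500, meanlog = log(30), sdlog = 0.25)  # invented length sample (cm)
froese_lbi(L, Lm = 25, Lopt = 32)                  # Pobj is the sum of the three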

detect — by Peter Solymos, 3 months ago

Analyzing Wildlife Data with Detection Error

Models for analyzing site occupancy and count data with detection error, including single-visit models (Lele et al. 2012, Moreno et al. 2010, Solymos et al. 2012, Denes et al. 2016), conditional distance sampling and time-removal models (QPAD) (Solymos et al. 2013, Solymos et al. 2018), and single-bin QPAD (SQPAD) models (Lele & Solymos 2025). Package development was supported by the Alberta Biodiversity Monitoring Institute and the Boreal Avian Modelling Project.
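
A hedged sketch of a single-visit occupancy fit on simulated data; the two-part formula (occupancy covariates before '|', detection covariates after) is assumed from the package documentation:

library(detect)

set.seed(1)
n   <- 200
x   <- rnorm(n); z <- rnorm(n)
psi <- plogis(-0.5 + x)                     # occupancy probability
p   <- plogis(0.3 + z)                      # detection probability
y   <- rbinom(n, 1, psi) * rbinom(n, 1, p)  # detected only if occupied
dat <- data.frame(y = y, x = x, z = z)

fit <- svocc(y ~ x | z, data = dat)         # assumed interface
summary(fit)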

cooltools — by Danail Obreschkow, 5 months ago

Practical Tools for Scientific Computations and Visualizations

Collection of routines for efficient scientific computations in physics and astrophysics. These routines include utility functions, numerical computation tools, and visualisation tools. They can be used, for example, for generating random numbers from spherical and custom distributions, information and entropy analysis, special Fourier transforms, two-point correlation estimation (e.g., as in Landy & Szalay 1993), binning and gridding of point sets, 2D interpolation, Monte Carlo integration, vector arithmetic, and coordinate transformations. Also included is a non-exhaustive list of important constants and cosmological conversion functions. The graphics routines can be used to produce and export publication-ready scientific plots and movies, e.g. as used in Obreschkow et al. (2020, MNRAS Vol 493, Issue 3, Pages 4551–4569). These routines include special color scales, projection functions, and bitmap handling routines.
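
As a generic illustration (not the cooltools API), the Landy & Szalay (1993) two-point correlation estimator combines normalized pair counts as follows:

# dd, dr, rr: pair counts per separation bin for data-data, data-random and
# random-random pairs, each normalized by the total number of such pairs.
landy_szalay <- function(dd, dr, rr) (dd - 2 * dr + rr) / rr

landy_szalay(dd = 0.012, dr = 0.010, rr = 0.009)  # xi > 0: clustering excess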

hierarchicalDS — by Paul B Conn, 6 years ago

Functions to Perform Hierarchical Analysis of Distance Sampling Data

Functions for performing hierarchical analysis of distance sampling data, with the ability to use an areal spatial ICAR model on top of user-supplied covariates to capture variation in abundance intensity. The detection model can be specified as a function of observer and individual covariates, where a parametric model is assumed for the population-level distribution of covariate values. The model uses data augmentation and a reversible-jump MCMC algorithm to sample animals that were never observed. Also included is the ability to include point independence (correlation between multiple observers' observations that increases as a function of distance, with independence assumed at distance zero or in the first distance bin), as well as the ability to model species misclassification rates using a multinomial logit formulation on data from double observers. There is also the ability to include zero inflation, but this is only recommended for cases where sample sizes and the spatial coverage of the survey are high.
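
The point-independence assumption can be written schematically (notation assumed here) as

  \operatorname{Corr}(y_{1}, y_{2} \mid d) = \rho(d), \qquad \rho(0) = 0, \quad \rho \text{ increasing in } d,

where y_1 and y_2 are the two observers' detections of the same animal at distance d, so dependence between observers grows away from the transect line.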

APCtools — by Alexander Bauer, 6 months ago

Routines for Descriptive and Model-Based APC Analysis

Age-Period-Cohort (APC) analyses are used to disentangle the relevant drivers of long-term developments. The 'APCtools' package offers visualization techniques and general routines to simplify the workflow of an APC analysis. Sophisticated functions are available for both descriptive and regression model-based analyses. For the former, we use density (or ridgeline) matrices and (hexagonally binned) heatmaps as innovative visualization techniques building on the concept of Lexis diagrams. Model-based analyses build on the separation of the temporal dimensions using generalized additive models, where a tensor product interaction surface (usually between age and period) represents the third dimension (usually cohort) on its diagonal. Such tensor product surfaces can also be estimated while accounting for further covariates in the regression model. See Weigert et al. (2021) for methodological details.
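
A minimal sketch of the model-based idea using 'mgcv' directly (APCtools' own wrappers differ; the data columns are invented): a tensor product surface over age and period carries cohort = period - age on its diagonal.

library(mgcv)

set.seed(1)
d <- data.frame(age    = sample(20:80, 500, replace = TRUE),
                period = sample(1990:2020, 500, replace = TRUE))
d$y <- rnorm(500, mean = 0.02 * d$age + 0.01 * (d$period - 1990))

m <- gam(y ~ te(age, period), data = d)  # tensor product interaction surface
summary(m)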