Temporal Tensor Decomposition, a Dimensionality Reduction Tool for Longitudinal Multivariate Data
TEMPoral TEnsor Decomposition (TEMPTED) is a dimension reduction method for multivariate longitudinal data with varying temporal sampling. It formats the data into a temporal tensor and decomposes it into a sum of low-dimensional components, each consisting of a subject loading vector, a feature loading vector, and a continuous temporal loading function. These loadings provide a low-dimensional representation of subjects or samples and can be used to identify features associated with clusters of subjects or samples. TEMPTED allows subjects to have different temporal sampling, so time points do not need to be binned and missing time points do not need to be imputed.
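As a rough intuition for this kind of decomposition (a conceptual sketch only, not the TEMPTED interface, which additionally smooths the temporal loading into a continuous function and handles irregular sampling), one loading vector per mode can be extracted from a regular subjects x features x time array via SVD of its mode unfoldings:

    set.seed(1)
    X <- array(rnorm(20 * 30 * 15), dim = c(20, 30, 15))  # subjects x features x time
    unfold <- function(A, mode) {                          # mode-m unfolding of a 3-way array
      d <- dim(A)
      matrix(aperm(A, c(mode, setdiff(seq_along(d), mode))), nrow = d[mode])
    }
    subj_loading <- svd(unfold(X, 1))$u[, 1]  # subject loading vector
    feat_loading <- svd(unfold(X, 2))$u[, 1]  # feature loading vector
    time_loading <- svd(unfold(X, 3))$u[, 1]  # discrete stand-in for the temporal loading function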
Processing Force-Plate Data
Process raw force-plate data (txt files) by segmenting them into trials and, if needed, calculating user-defined descriptive
statistics of variables in user-defined time bins (relative to trigger onsets) for each trial. When segmenting the data, a baseline
correction, a filter, and data imputation can be applied if needed. Experimental data can also be processed and combined with the
segmented force-plate data. This procedure is suggested by Johannsen et al. (2023)
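To illustrate the kind of time binning described above (a toy sketch, not the package's interface; the sampling rate, trigger index, and bin width are made up), the mean force per 100 ms bin relative to a trigger onset can be computed like this:

    fs <- 1000                                  # sampling rate in Hz (assumed)
    force <- rnorm(2000)                        # 2 s of simulated force data
    trigger <- 501                              # sample index of the trigger onset
    rel_t <- (seq_along(force) - trigger) / fs  # time in seconds relative to trigger
    bins <- cut(rel_t, breaks = seq(-0.5, 1.5, by = 0.1))
    tapply(force, bins, mean)                   # one descriptive statistic per bin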
Spatial KWD for Large Spatial Maps
Contains efficient implementations of Discrete Optimal Transport algorithms for the computation of Kantorovich-Wasserstein distances between pairs of large spatial maps (Bassetti, Gualandi, Veneroni, 2020).
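As a one-dimensional toy illustration of the distance being computed (the package itself handles two-dimensional maps with specialised network-flow algorithms; the histograms below are made up), the 1-D Wasserstein-1 distance between two histograms on a common grid reduces to the area between their cumulative sums:

    p <- c(0.1, 0.4, 0.3, 0.2)   # first histogram (sums to 1)
    q <- c(0.3, 0.3, 0.2, 0.2)   # second histogram (sums to 1)
    bin_width <- 1
    sum(abs(cumsum(p) - cumsum(q))) * bin_width  # W1 distance on the line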
Density Estimation from GROuped Summary Statistics
Estimation of a density from grouped (tabulated) summary statistics evaluated in each of the big bins (or classes) partitioning the support of the variable. These statistics include class frequencies and central moments of order one up to four. The log-density is modelled using a linear combination of penalised B-splines. The multinomial log-likelihood involving the frequencies is combined with a roughness penalty based on the differences in the coefficients of neighbouring B-splines and with the log of a root-n approximation of the sampling density of the observed vector of central moments in each class. The resulting penalized log-likelihood is maximized using the EM algorithm to obtain an estimate of the spline parameters and, consequently, of the variable density and related quantities such as quantiles, see Lambert, P. (2021)
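The two main ingredients named above, a B-spline basis for the log-density and a difference penalty on neighbouring coefficients, can be set up in a few lines (a sketch only; the knot placement and penalty order are illustrative, and the EM maximisation is not shown):

    library(splines)
    x <- seq(0, 1, length.out = 200)              # grid over the variable's support
    knots <- seq(-0.15, 1.15, by = 0.05)          # equidistant knots extending past the range
    B <- splineDesign(knots, x, ord = 4, outer.ok = TRUE)  # cubic B-spline basis
    D <- diff(diag(ncol(B)), differences = 2)     # 2nd-order difference operator
    P <- crossprod(D)                             # roughness penalty matrix t(D) %*% D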
Tools for Outbreak Investigation/Infectious Disease Surveillance
Create epicurves, epigantt charts, and diverging bar charts using 'ggplot2'. Prepare data for visualisation or other reporting for infectious disease surveillance and outbreak investigation (time series data). Includes tidy functions to solve date-based transformations for common reporting tasks, like (A) seasonal date alignment for respiratory disease surveillance, (B) date-based case binning based on specified time intervals like isoweek, epiweek, month and more, (C) automated detection and marking of the new year based on the date/datetime axis of a 'ggplot2' plot, (D) labelling of the last value of a time series. An introduction on how to use epicurves can be found on the US CDC website (2012, <https://www.cdc.gov/training/quicklearns/epimode/index.html>).
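As a small sketch of the ISO-week case binning mentioned in (B) (base R only, with made-up case dates; the package's own helpers are not shown), line-list dates can be collapsed into weekly counts like this:

    set.seed(1)
    cases <- as.Date("2024-01-01") + sample(0:120, 300, replace = TRUE)  # simulated line list
    isoweek <- format(cases, "%G-W%V")   # ISO 8601 year-week labels
    table(isoweek)                       # weekly case counts, ready for an epicurve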
Estimating Length-Based Indicators for Fish Stock
Provides tools for estimating length-based indicators from length frequency data to assess fish stock status and manage fisheries sustainably. Implements methods from Cope and Punt (2009)
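As a toy example of one such length-based indicator (the lengths and maturity length below are made up, and this is not the package's interface), the proportion of the catch above the length at maturity can be computed directly from length-frequency data:

    lengths <- c(12, 18, 22, 25, 27, 30, 33, 35)  # observed catch lengths in cm (illustrative)
    Lmat <- 24                                     # length at maturity in cm (illustrative)
    mean(lengths > Lmat)                           # Pmat: proportion of mature-sized fish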
Analyzing Wildlife Data with Detection Error
Models for analyzing site occupancy and count data
with detection error, including
single-visit based models (Lele et al. 2012
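For intuition about the detection-error structure these models share (a conceptual sketch with illustrative parameter values, not this package's API), the single-site likelihood of a single-visit occupancy model mixes an occupancy probability psi with a detection probability p:

    loglik_site <- function(y, psi, p) {
      # y = 1: site occupied and species detected
      # y = 0: occupied but missed, or truly unoccupied
      if (y == 1) log(psi * p) else log(psi * (1 - p) + (1 - psi))
    }
    loglik_site(1, psi = 0.6, p = 0.4)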
Practical Tools for Scientific Computations and Visualizations
Collection of routines for efficient scientific computations in physics and astrophysics. These routines include utility functions, numerical computation tools, as well as visualisation tools. They can be used, for example, to generate random numbers from spherical and custom distributions, perform information and entropy analysis, compute special Fourier transforms, and estimate two-point correlations (e.g. as in Landy & Szalay (1993)
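One of the listed use cases, drawing points uniformly from a sphere, has a standard construction worth spelling out (a base-R sketch, not this package's functions): normalising independent Gaussian draws yields directions uniform on the unit sphere.

    n <- 1000
    v <- matrix(rnorm(3 * n), ncol = 3)  # independent standard normal coordinates
    v <- v / sqrt(rowSums(v^2))          # each row is now uniform on the unit sphere S^2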
Functions to Perform Hierarchical Analysis of Distance Sampling Data
Functions for performing hierarchical analysis of distance sampling data, with the ability to use an areal spatial ICAR model on top of user-supplied covariates to capture variation in abundance intensity. The detection model can be specified as a function of observer and individual covariates, where a parametric model is assumed for the population-level distribution of covariate values. The model uses data augmentation and a reversible jump MCMC algorithm to sample animals that were never observed. Also included is the ability to include point independence (correlation between multiple observers' observations increasing as a function of distance, with independence assumed at distance zero or in the first distance bin), as well as the ability to model species misclassification rates using a multinomial logit formulation on data from double observers. There is also the ability to include zero inflation, but this is only recommended for cases where sample sizes and spatial coverage of the survey are high.
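A common parametric building block in detection models of this kind is the half-normal detection function (shown here as a generic sketch with an illustrative scale parameter, not this package's specification):

    half_normal <- function(d, sigma) exp(-d^2 / (2 * sigma^2))  # detection prob. at distance d
    curve(half_normal(x, sigma = 50), from = 0, to = 200,
          xlab = "distance (m)", ylab = "detection probability")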
Routines for Descriptive and Model-Based APC Analysis
Age-Period-Cohort (APC) analyses are used to disentangle the relevant drivers of long-term developments.
The 'APCtools' package offers visualization techniques and general routines to simplify the workflow of an APC analysis.
Sophisticated functions are available both for descriptive and regression model-based analyses.
For the former, we use density (or ridgeline) matrices and (hexagonally binned) heatmaps as innovative visualization
techniques building on the concept of Lexis diagrams.
Model-based analyses build on the separation of the temporal dimensions based on generalized additive models,
where a tensor product interaction surface (usually between age and period) is utilized
to represent the third dimension (usually cohort) on its diagonal.
Such tensor product surfaces can also be estimated while accounting for
further covariates in the regression model.
See Weigert et al. (2021)
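As a minimal sketch of the model class described above (the simulated data and variable names are made up; mgcv's te() provides the tensor product interaction surface):

    library(mgcv)
    set.seed(1)
    dat <- data.frame(age = runif(500, 20, 80), period = runif(500, 1990, 2020))
    dat$y <- sin(dat$age / 10) + 0.05 * (dat$period - 2000) + rnorm(500, sd = 0.2)
    # tensor product surface over age and period; the cohort effect
    # (period - age) runs along the diagonal of this surface
    m <- gam(y ~ te(age, period), data = dat)
    summary(m)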