Soundscape Background Noise, Power, and Saturation
Accessible and flexible implementation of three ecoacoustic indices that are less commonly available in existing R frameworks: Background Noise, Soundscape Power, and Soundscape Saturation. The functions were designed to accommodate a variety of sampling designs. Users can tailor calculations by specifying the spectrogram time bin size, amplitude thresholds, and normality tests. By simplifying computation and standardizing reproducible methods, the package aims to support ecoacoustic studies. For more details on the indices, see Towsey (2017) <https://eprints.qut.edu.au/110634/> and Burivalova (2017).
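As a rough illustration of the background-noise idea (a Python sketch of one common approach in the spirit of Towsey (2017), not this R package's implementation; the function name and bin count are assumptions):

```python
import numpy as np

def background_noise_db(spectrum_db, n_bins=100):
    """Estimate background noise as the modal dB value of a set of frames.

    Assumes the recording is mostly 'quiet', so the most frequent
    amplitude level approximates the background noise floor.
    """
    counts, edges = np.histogram(spectrum_db, bins=n_bins)
    modal = np.argmax(counts)
    return 0.5 * (edges[modal] + edges[modal + 1])  # midpoint of modal bin

rng = np.random.default_rng(0)
# quiet background around -50 dB plus sparse louder acoustic events
frames = np.concatenate([rng.normal(-50, 2, 900), rng.normal(-20, 5, 100)])
print(background_noise_db(frames))  # close to -50, ignoring the loud events
```

The modal-value estimate is robust to transient sounds because events occupy few frames relative to the background.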
Fitting Single and Mixture of Generalised Lambda Distributions
The fitting algorithms in this package have two major objectives. One is to provide a smoothing device that fits distributions to data using the weighted and unweighted discretised approaches based on the bin width of the histogram. The other is to provide a definitive fit to the data set using maximum likelihood and quantile matching estimation. Other methods, such as moment matching, the starship method, and L-moment matching, are also provided. Goodness of fit can be assessed via QQ-plots, KS-resample tests, and by comparing the mean, variance, skewness, and kurtosis of the data with those of the fitted distribution. References include Karvanen and Nuutinen (2008) "Characterizing the generalized lambda distribution by L-moments".
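To illustrate quantile matching for the generalised lambda distribution, here is a Python sketch (not the package's code) using the Ramberg-Schmeiser quantile function Q(u) = l1 + (u^l3 - (1-u)^l4)/l2; the starting values and probability grid are arbitrary choices:

```python
import numpy as np
from scipy.optimize import minimize

def gld_quantile(u, l1, l2, l3, l4):
    """RS-parameterised generalised lambda quantile function."""
    return l1 + (u**l3 - (1 - u)**l4) / l2

def qm_objective(params, probs, emp_q):
    """Sum of squared gaps between model and empirical quantiles."""
    l1, l2, l3, l4 = params
    if l2 <= 0:
        return np.inf  # keep the scale parameter positive
    return np.sum((gld_quantile(probs, l1, l2, l3, l4) - emp_q) ** 2)

rng = np.random.default_rng(1)
probs = np.linspace(0.05, 0.95, 19)
# simulate from a known GLD via inverse-transform sampling, then refit
true = (0.0, 0.2, 0.13, 0.13)  # roughly normal-shaped GLD
data = gld_quantile(rng.uniform(size=5000), *true)
emp_q = np.quantile(data, probs)
fit = minimize(qm_objective, x0=(0.1, 0.3, 0.2, 0.2),
               args=(probs, emp_q), method="Nelder-Mead")
print(fit.x)  # should land near the true parameters
```

Quantile matching avoids the density, which has no closed form for the GLD; only the quantile function is needed.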
Calls Copy Number Alterations from Slide-Seq Data
This package takes spatial single-cell-type RNA-seq data (specifically designed for Slide-seq v2), calls copy number alterations (CNAs) using pseudo-spatial binning, clusters cellular units (e.g. beads) based on their CNA profiles, and visualizes spatial CNA patterns. Documentation about 'SlideCNA' is included in the pre-print by Zhang et al. (2022).
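A minimal Python sketch of what pseudo-spatial binning might look like (column names and bin size are illustrative assumptions, not 'SlideCNA' defaults): beads are assigned to square spatial bins and their expression is aggregated per bin.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# toy bead table: spatial coordinates plus one expression value per bead
beads = pd.DataFrame({
    "x": rng.uniform(0, 100, 500),
    "y": rng.uniform(0, 100, 500),
    "expr": rng.poisson(5, 500).astype(float),
})

bin_size = 25.0  # side length of each square spatial bin (assumed)
beads["bin_x"] = (beads["x"] // bin_size).astype(int)
beads["bin_y"] = (beads["y"] // bin_size).astype(int)

# aggregate bead-level expression to bin-level profiles
binned = beads.groupby(["bin_x", "bin_y"])["expr"].mean().reset_index()
print(binned.shape)  # a 4 x 4 grid -> 16 bins, 3 columns
```

Aggregating beads this way smooths sparse per-bead counts before any downstream CNA inference or clustering.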
Temporal Tensor Decomposition, a Dimensionality Reduction Tool for Longitudinal Multivariate Data
TEMPoral TEnsor Decomposition (TEMPTED) is a dimension reduction method for multivariate longitudinal data with varying temporal sampling. It formats the data into a temporal tensor and decomposes it into a summation of low-dimensional components, each consisting of a subject loading vector, a feature loading vector, and a continuous temporal loading function. These loadings provide a low-dimensional representation of subjects or samples and can be used to identify features associated with clusters of subjects or samples. TEMPTED provides the flexibility of allowing subjects to have different temporal sampling, so time points do not need to be binned, and missing time points do not need to be imputed.
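To make the low-rank idea concrete, here is a toy Python sketch of a single-component fit to a subject x feature x time tensor via alternating least squares. This is only an illustration of the tensor-decomposition concept, not TEMPTED's algorithm (which handles irregular sampling and estimates a continuous temporal function):

```python
import numpy as np

rng = np.random.default_rng(3)
# build a noisy rank-1 tensor: subject (5) x feature (4) x time (6)
a0, b0, c0 = rng.normal(size=5), rng.normal(size=4), rng.normal(size=6)
T = np.einsum("i,j,k->ijk", a0, b0, c0) + 0.01 * rng.normal(size=(5, 4, 6))

# alternating least squares for one component a (x) b (x) c
a, b, c = np.ones(5), np.ones(4), np.ones(6)
for _ in range(50):
    a = np.einsum("ijk,j,k->i", T, b, c) / ((b @ b) * (c @ c))
    b = np.einsum("ijk,i,k->j", T, a, c) / ((a @ a) * (c @ c))
    c = np.einsum("ijk,i,j->k", T, a, b) / ((a @ a) * (b @ b))

approx = np.einsum("i,j,k->ijk", a, b, c)
rel_err = np.linalg.norm(T - approx) / np.linalg.norm(T)
print(rel_err)  # small: one component captures the structure
```

Here `a` plays the role of the subject loadings, `b` the feature loadings, and `c` a discretised stand-in for the continuous temporal loading function.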
Processing Force-Plate Data
Process raw force-plate data (txt files) by segmenting it into trials and, if needed, calculating user-defined descriptive
statistics of variables for user-defined time bins (relative to trigger onsets) for each trial. When segmenting the data,
baseline correction, filtering, and data imputation can be applied if needed. Experimental data can also be processed and
combined with the segmented force-plate data. This procedure is suggested by Johannsen et al. (2023).
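The binning step above can be sketched in Python as follows (function and argument names are illustrative assumptions, not this package's API):

```python
import numpy as np

def bin_means(t_ms, force, trigger_ms, bin_width_ms=100, n_bins=5):
    """Mean force in consecutive fixed-width time bins after a trigger onset."""
    rel = t_ms - trigger_ms  # time relative to the trigger
    return np.array([
        force[(rel >= b * bin_width_ms) & (rel < (b + 1) * bin_width_ms)].mean()
        for b in range(n_bins)
    ])

t_ms = np.arange(1000)                      # 1 kHz sampling, times in ms
force = np.where(t_ms < 500, 10.0, 20.0)    # force steps up at 500 ms
print(bin_means(t_ms, force, trigger_ms=300))  # [10. 10. 20. 20. 20.]
```

Each trial's signal is reduced to one descriptive statistic per bin, which makes trials with different lengths directly comparable.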
Spatial KWD for Large Spatial Maps
Contains efficient implementations of discrete optimal transport algorithms for computing Kantorovich-Wasserstein distances between pairs of large spatial maps (Bassetti, Gualandi, and Veneroni, 2020).
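The package targets large 2-D maps with specialised network-flow algorithms; as a toy Python illustration of the underlying distance, the 1-D case reduces to the area between the two cumulative distributions (a sketch, not the package's method):

```python
import numpy as np

def wasserstein_1d(p, q, positions):
    """W1 distance between two histograms supported on the same 1-D grid."""
    p = np.asarray(p, float) / np.sum(p)  # normalise to probability mass
    q = np.asarray(q, float) / np.sum(q)
    cdf_diff = np.cumsum(p - q)           # difference of the two CDFs
    widths = np.diff(positions)
    return np.sum(np.abs(cdf_diff[:-1]) * widths)

positions = np.array([0.0, 1.0, 2.0, 3.0])
# unit mass at position 0 vs unit mass at position 3:
print(wasserstein_1d([1, 0, 0, 0], [0, 0, 0, 1], positions))  # 3.0
```

In two dimensions no such closed form exists, which is why efficient network-flow solvers are needed for large maps.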
Tools for Outbreak Investigation/Infectious Disease Surveillance
Create epicurves, Epi Gantt charts, and diverging bar charts using 'ggplot2'. Prepare data for visualisation or other reporting for infectious disease surveillance and outbreak investigation (time series data). Includes tidy functions to solve date-based transformations for common reporting tasks, such as (A) seasonal date alignment for respiratory disease surveillance, (B) date-based case binning using specified time intervals such as isoweek, epiweek, and month, (C) automated detection and marking of the new year on the date/datetime axis of a 'ggplot2' plot, and (D) labelling of the last value of a time series. An introduction on how to use epicurves can be found on the US CDC website (2012, <https://www.cdc.gov/training/quicklearns/epimode/index.html>).
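Task (B), date-based case binning by ISO week, can be sketched in plain Python (an illustration of the concept, not this package's tidy functions):

```python
from datetime import date

# case onset dates straddling a year boundary
cases = [date(2024, 1, 1), date(2024, 1, 3),
         date(2024, 1, 10), date(2023, 12, 31)]

counts = {}
for d in cases:
    iso = d.isocalendar()                    # ISO year / week / weekday
    label = f"{iso.year}-W{iso.week:02d}"
    counts[label] = counts.get(label, 0) + 1

print(counts)
```

Note the year-boundary subtlety: 2023-12-31 (a Sunday) belongs to ISO week 52 of 2023, while 2024-01-01 opens ISO week 1 of 2024, so the ISO year, not the calendar year, must be used in the bin label.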
Density Estimation from GROuped Summary Statistics
Estimation of a density from grouped (tabulated) summary statistics evaluated in each of the big bins (or classes) partitioning the support of the variable. These statistics include class frequencies and central moments of orders one up to four. The log-density is modelled using a linear combination of penalised B-splines. The objective combines the multinomial log-likelihood for the class frequencies, a roughness penalty based on differences in the coefficients of neighbouring B-splines, and the log of a root-n approximation of the sampling density of the observed vector of central moments in each class. The resulting penalised log-likelihood is maximised using the EM algorithm to obtain an estimate of the spline parameters and, consequently, of the variable density and related quantities such as quantiles; see Lambert, P. (2021).
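Two of the ingredients named above, a B-spline basis for the log-density and a difference-based roughness penalty, can be constructed as follows (a Python sketch of the standard P-spline setup, not this package's internals; the knot grid and penalty order are illustrative choices):

```python
import numpy as np
from scipy.interpolate import BSpline

x = np.linspace(0, 1, 200, endpoint=False)   # evaluation grid on the support
k = 3                                        # cubic B-splines
# clamped knot vector: repeated boundary knots plus 11 equidistant knots
knots = np.concatenate([[0.0] * k, np.linspace(0, 1, 11), [1.0] * k])
basis = BSpline.design_matrix(x, knots, k).toarray()   # 200 x 13 basis matrix

# roughness penalty from second-order differences of neighbouring coefficients
D = np.diff(np.eye(basis.shape[1]), n=2, axis=0)
penalty = D.T @ D

theta = np.zeros(basis.shape[1])   # spline coefficients (to be estimated)
log_density = basis @ theta        # log-density as a linear combination
print(basis.shape, penalty.shape)
```

The penalised log-likelihood then subtracts a multiple of `theta @ penalty @ theta` from the data log-likelihood, shrinking neighbouring coefficients toward each other and hence smoothing the estimated density.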
Analyzing Wildlife Data with Detection Error
Models for analyzing site occupancy and count data with detection error, including
single-visit models (Lele et al. 2012).
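A small Python simulation illustrates why detection error matters (a conceptual sketch, not this package's estimators): with occupancy probability psi and detection probability p, a single visit records a detection with probability psi * p, so naive occupancy estimates are biased low.

```python
import numpy as np

rng = np.random.default_rng(4)
psi, p = 0.6, 0.5          # true occupancy and detection probabilities
n_sites = 100_000

occupied = rng.random(n_sites) < psi                 # latent occupancy state
detected = occupied & (rng.random(n_sites) < p)      # observed detections

print(detected.mean())     # near psi * p = 0.3, not psi = 0.6
```

Models with explicit detection components recover psi and p separately (in single-visit designs, identifiability relies on covariates), rather than conflating them in the raw detection rate.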
Routines for Descriptive and Model-Based APC Analysis
Age-Period-Cohort (APC) analyses are used to differentiate the relevant drivers of long-term developments.
The 'APCtools' package offers visualization techniques and general routines to simplify the workflow of an APC analysis.
Sophisticated functions are available both for descriptive and regression model-based analyses.
For the former, we use density (or ridgeline) matrices and (hexagonally binned) heatmaps as innovative visualization
techniques building on the concept of Lexis diagrams.
Model-based analyses build on the separation of the temporal dimensions based on generalized additive models,
where a tensor product interaction surface (usually between age and period) is utilized
to represent the third dimension (usually cohort) on its diagonal.
Such tensor product surfaces can also be estimated while accounting for
further covariates in the regression model.
See Weigert et al. (2021).
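A tiny numeric illustration of the identity underlying Lexis diagrams and the diagonal representation above: cohort = period - age, so any two of the three temporal dimensions determine the third (plain Python, not this package's code).

```python
import numpy as np

age = np.arange(20, 25)            # ages 20..24
period = np.full_like(age, 2000)   # all observed in the year 2000
cohort = period - age              # birth cohorts lie on the diagonal
print(cohort)  # [1980 1979 1978 1977 1976]
```

This exact linear dependence is why APC models cannot estimate all three effects freely, and why the package represents cohort via a tensor product surface of age and period instead.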