R Interface to Proximal Interior Point Quadratic Programming Solver
An embedded proximal interior point quadratic programming solver, which can solve dense and sparse quadratic programs, as described in Schwan, Jiang, Kuhn, and Jones (2023).
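A minimal sketch of solving a small quadratic program with this interface. The function name solve_piqp() and its argument names (P, c, G, h) are assumptions about the package's API; check the package documentation before use.

```r
# Hedged sketch: minimize 0.5 * t(x) %*% P %*% x + t(c) %*% x
# subject to G %*% x <= h  (argument names are assumptions; see ?solve_piqp)
library(piqp)

P <- diag(2)          # quadratic cost matrix (identity here)
c <- c(-1, -1)        # linear cost vector
G <- rbind(c(1, 1))   # one inequality constraint: x1 + x2 <= 1
h <- 1

res <- solve_piqp(P = P, c = c, G = G, h = h)
res$x                 # the solution vector
```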
Processing Linear Features
Assists in the manipulation and processing of linear features with the help of the 'sf' package.
Uses linear referencing to extract data from most shapefiles.
Reference for this package's methods: Albeke, S.E. et al. (2010).
Random Survival Forest for Recurrent Events
A tool designed to analyze recurrent events when dealing with right-censored data and the potential presence of a terminal event (one that prevents further occurrences, like death). It extends the random survival forest algorithm, adapting splitting rules and node estimators to handle the complexities of recurrent events. The methodology is fully described in Murris, J., Bouaziz, O., Jakubczak, M., Katsahian, S., & Lavenu, A. (2024) (<https://hal.science/hal-04612431v1/document>).
Machine Learning and Mapping for Spatial Epidemiology
Provides tools for the integration, visualisation, and modelling of spatial epidemiological data using the method described in Azeez, A., & Noel, C. (2025), 'Predictive Modelling and Spatial Distribution of Pancreatic Cancer in Africa Using Machine Learning-Based Spatial Model'.
Beyond the Border - Kernel Density Estimation for Urban Geography
The kernelSmoothing() function bins geolocated data onto a square grid and smooths it. It computes either a classical (conservative) kernel smoothing or a geographically weighted median. There are four major call modes of the function.
The first call mode is kernelSmoothing(obs, epsg, cellsize, bandwidth) for a classical kernel smoothing and automatic grid.
The second call mode is kernelSmoothing(obs, epsg, cellsize, bandwidth, quantiles) for a geographically weighted median and automatic grid.
The third call mode is kernelSmoothing(obs, epsg, cellsize, bandwidth, centroids) for a classical kernel smoothing and user grid.
The fourth call mode is kernelSmoothing(obs, epsg, cellsize, bandwidth, quantiles, centroids) for a geographically weighted median and user grid.
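The four call modes above can be sketched as follows. Here obs and my_grid are placeholder objects (a data frame of geolocated observations and a user-supplied grid of centroids); the positional argument names follow the text above, so check ?kernelSmoothing in the 'btb' package for the exact signature.

```r
# Sketch of the four call modes of kernelSmoothing() (names follow the
# description above; verify against the package documentation)
library(btb)

# 1. classical kernel smoothing, automatic grid
sm1 <- kernelSmoothing(obs, "2154", cellsize = 200, bandwidth = 400)

# 2. geographically weighted median, automatic grid
sm2 <- kernelSmoothing(obs, "2154", 200, 400, quantiles = c(0.5))

# 3. classical kernel smoothing on a user-supplied grid
sm3 <- kernelSmoothing(obs, "2154", 200, 400, centroids = my_grid)

# 4. geographically weighted median on a user-supplied grid
sm4 <- kernelSmoothing(obs, "2154", 200, 400,
                       quantiles = c(0.5), centroids = my_grid)
```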
Reference: Brunsdon, C., et al. (2002), 'Geographically weighted summary statistics: a framework for localised exploratory data analysis', Computers, Environment and Urban Systems.
Publication Toolkit for Water, Sanitation and Hygiene (WASH) Data
A toolkit to set up an R data package in a consistent structure. Automates tasks like tidy data export, data dictionary documentation, README and website creation, and citation management.
Multivariate Fay Herriot Models for Small Area Estimation
Implements multivariate Fay-Herriot models for small area estimation using the empirical best linear unbiased prediction (EBLUP) estimator. Multivariate models exploit the correlation among several target variables and borrow strength from auxiliary variables to improve the effective domain sample size. The models accommodated by this package are the univariate model with several target variables (model 0), the multivariate model (model 1), the autoregressive multivariate model (model 2), and the heteroscedastic autoregressive multivariate model (model 3). Functions provide EBLUP estimators and a mean squared error (MSE) estimator for each model. These models were developed by Roberto Benavent and Domingo Morales (2015).
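A hedged sketch of fitting the basic multivariate model (model 1). The function name eblupMFH1() and the vardir convention are assumptions about this package's interface; mydata, the response columns Y1/Y2, the auxiliary X1, and the variance columns v1/v12/v2 are all hypothetical placeholders.

```r
# Hedged sketch (function and argument names are assumptions; see the
# package manual). Two target variables modeled jointly against one
# auxiliary variable; v1, v12, v2 would hold the sampling
# variances/covariances of the direct estimates.
library(msae)

fit <- eblupMFH1(list(Y1 ~ X1, Y2 ~ X1),
                 vardir = c("v1", "v12", "v2"),
                 data = mydata)
fit$eblup   # EBLUP estimates for each domain
fit$MSE     # estimated mean squared errors
```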
'Drat' R Archive Template
Creation and use of R repositories via helper functions to insert packages into a repository and to add repository information to the current R session. Two primary types of repositories are supported: gh-pages on GitHub, as well as local repositories on the same machine or a local network. Drat is a recursive acronym: Drat R Archive Template.
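A typical workflow with the two helpers the description mentions, insertPackage() and addRepo(). The package file name, the repository path, and the GitHub account are placeholders.

```r
# Sketch of the two primary drat operations (paths and names are placeholders)
library(drat)

# add a built package to a local checkout of your drat repository
insertPackage("mypkg_0.1.0.tar.gz", repodir = "~/git/drat")

# make a gh-pages drat repository available in the current session
addRepo("myaccount")   # e.g. resolves to the account's GitHub Pages drat repo
getOption("repos")     # the new repository now appears here
```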
Find, Characterize, and Explore Extreme Events in Climate Projections
Inputs a directory of climate projection files and, for each, identifies and characterizes heat waves for specified study locations. The definition used to identify heat waves can be customized. Heat wave characterizations include several metrics of heat wave length, intensity, and timing in the year. The heat waves that are identified can be explored using a function to apply user-created functions across all generated heat wave files. This work was supported in part by grants from the National Institute of Environmental Health Sciences (R00ES022631), the National Science Foundation (1331399), and the Colorado State University Vice President for Research.
A Hypothesis Testing Framework for Validating an Assay for Precision
A common way of validating a biological assay for precision is through a
procedure, where m levels of an analyte are measured with n replicates at each
level, and if all m estimates of the coefficient of variation (CV) are less
than some prespecified level, then the assay is declared validated for precision
within the range of the m analyte levels. Two limitations of this procedure are:
there is no clear statistical statement of precision upon passing, and it is
unclear how to modify the procedure for assays with constant standard deviation.
We provide tools to convert such a procedure into a set of m hypothesis tests.
This reframing motivates the m:n:q procedure, which upon completion delivers
a 100q% upper confidence limit on the CV. Additionally, for a post-validation
assay output of y, the method gives an "effective standard deviation interval"
of log(y) plus or minus r, which is a 68% confidence interval on log(mu), where
mu is the expected value of the assay output for that sample. Further, the m:n:q
procedure can be straightforwardly applied to constant standard deviation assays.
We illustrate these tools by applying them to a growth inhibition assay. This is
an implementation of the methods described in Fay, Sachs, and Miura (2018).
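The core idea at a single analyte level can be illustrated in base R. This is only a sketch of the statistics involved, not the package's actual code: assuming a lognormal assay model, CV = sqrt(exp(sigma^2) - 1), where sigma is the standard deviation of log(y), and a one-sided chi-square bound on sigma^2 from n replicates yields a 100q% upper confidence limit on the CV.

```r
# Illustrative base-R sketch (not the package's implementation):
# 100q% upper confidence limit on the CV at one analyte level, under a
# lognormal assay model where CV = sqrt(exp(sigma^2) - 1).
cv_upper <- function(y, q = 0.90) {
  n      <- length(y)
  s2     <- var(log(y))                       # sample variance of log outputs
  s2_ucl <- (n - 1) * s2 / qchisq(1 - q, df = n - 1)  # upper CL on sigma^2
  sqrt(exp(s2_ucl) - 1)                       # upper CL on the CV
}

set.seed(1)
y <- rlnorm(10, meanlog = 2, sdlog = 0.1)     # one level, n = 10 replicates
cv_upper(y, q = 0.90)                         # compare against the CV threshold
```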