
SpatialBSS — by Klaus Nordhausen, a year ago

Blind Source Separation for Multivariate Spatial Data

Blind source separation for multivariate spatial data based on simultaneous/joint diagonalization of (robust) local covariance matrices. This package is an implementation of the methods described in Bachoc, Genton, Nordhausen, Ruiz-Gazen and Virta (2020).

depCensoring — by Negera Wakgari Deresa, 4 months ago

Statistical Methods for Survival Data with Dependent Censoring

Several statistical methods for analyzing survival data under various forms of dependent censoring are implemented in the package. In addition to accounting for dependent censoring, it offers tools to adjust for unmeasured confounding factors. The implemented approaches allow users to estimate the dependency between survival time and dependent censoring time based solely on observed survival data. For more details on the methods, refer to Deresa and Van Keilegom (2021), Czado and Van Keilegom (2023), Crommen et al. (2024), Deresa and Van Keilegom (2024), Willems et al. (2025), Ding and Van Keilegom (2025) and D'Haen et al. (2025).

cquad — by Francesco Bartolucci, 3 years ago

Conditional Maximum Likelihood for Quadratic Exponential Models for Binary Panel Data

Estimation, based on conditional maximum likelihood, of the quadratic exponential model proposed by Bartolucci, F. & Nigro, V. (2010, Econometrica), and of a simplified and a modified version of this model. The quadratic exponential model is suitable for the analysis of binary longitudinal data when state dependence (beyond the effect of the covariates and a time-fixed individual intercept) has to be taken into account. It is therefore an alternative to the dynamic logit model, with the advantage that conditional inference can easily be used to eliminate the individual intercepts and thus obtain consistent estimates of the parameters of main interest (for the covariates and the lagged response). The simplified version of this model does not distinguish, as the original model does, between the last time occasion and the previous occasions. The modified version formulates the interaction terms differently and may be used to test for state dependence in a simple way, as shown in Bartolucci, F., Nigro, V. & Pigini, C. (2018, Econometric Reviews). The package also includes estimation of the dynamic logit model by a pseudo conditional estimator based on the quadratic exponential model, as proposed by Bartolucci, F. & Nigro, V. (2012, Journal of Econometrics). For large time dimensions of the panel, the computation of the proposed models involves a recursive function from Krailo, M. D. & Pike, M. C. (1984, Journal of the Royal Statistical Society, Series C (Applied Statistics)) and Bartolucci, F., Valentini, F. & Pigini, C. (2021, Computational Economics).

kmeRtone — by Aleksandr Sahakyan, 2 years ago

Multi-Purpose and Flexible k-Meric Enrichment Analysis Software

A multi-purpose and flexible k-meric enrichment analysis software. 'kmeRtone' measures the enrichment of k-mers by comparing the population of k-mers in the case loci with a carefully devised internal negative control group, consisting of k-mers from regions close to, yet sufficiently distant from, the case loci to mitigate any potential sequencing bias. This method captures both local sequencing variations and broader sequence influences while correcting for potential biases, thereby ensuring a more accurate analysis. The core functionality of 'kmeRtone' is the SCORE() function, which calculates susceptibility scores for k-mers in case and control regions. Case regions are defined by genomic coordinates provided in a file by the user, and the control regions can be constructed relative to the case regions or provided directly. The k-meric susceptibility scores are calculated using a one-proportion z-statistic. 'kmeRtone' is highly flexible, allowing users to specify their target k-mer patterns and quantify the corresponding k-mer enrichment scores in the context of these patterns, enabling a more comprehensive approach to understanding the functional implications of specific DNA sequences on a genomic scale (e.g., CT motifs upon UV radiation damage). Adib A. Abdullah, Patrick Pflughaupt, Claudia Feng, Aleksandr B. Sahakyan (2024) Bioinformatics (submitted).
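As a rough illustration of the one-proportion z-statistic mentioned above (a generic textbook sketch, not kmeRtone's internal implementation — the counts and baseline proportion here are made up for demonstration):

```python
import math

def one_prop_z(successes, n, p0):
    """One-proportion z-statistic: (p_hat - p0) / sqrt(p0 * (1 - p0) / n)."""
    p_hat = successes / n
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

# Hypothetical numbers: a k-mer seen 60 times among 100 case-region k-mer
# slots, tested against a control-derived baseline proportion of 0.5.
z = one_prop_z(60, 100, 0.5)
print(round(z, 3))  # 2.0
```

A large positive z suggests the k-mer is over-represented in the case regions relative to the control baseline; in practice the package compares case loci against its internally constructed control regions rather than a fixed p0.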

multimorbidity — by Wyatt Bensken, 3 years ago

Harmonizing Various Comorbidity, Multimorbidity, and Frailty Measures

Identifying comorbidities, frailty, and multimorbidity in claims and administrative data is often a duplicative process. The functions contained in this package first prepare the data into a format acceptable to all other packages, then provide a uniform and simple approach to generating comorbidity and multimorbidity metrics from these claims data. The package is continually evolving to include new metrics and welcomes suggestions of measures to add. The citations used in this package include the following publications: Anne Elixhauser, Claudia Steiner, D. Robert Harris, Rosanna M. Coffey (1998); Brian J. Moore, Susan White, Raynard Washington, et al. (2017); Mary E. Charlson, Peter Pompei, Kathy L. Ales, C. Ronald MacKenzie (1987); Richard A. Deyo, Daniel C. Cherkin, Marcia A. Ciol (1992); Hude Quan, Vijaya Sundararajan, Patricia Halfon, et al. (2005); Dae Hyun Kim, Sebastian Schneeweiss, Robert J. Glynn, et al. (2018); Melissa Y. Wei, David Ratz, Kenneth J. Mukamal (2020); Kathryn Nicholson, Amanda L. Terry, Martin Fortin, et al. (2015); Martin Fortin, José Almirall, and Kathryn Nicholson (2017).

sivirep — by Geraldine Gómez-Millán, a year ago

Data Wrangling and Automated Reports from 'SIVIGILA' Source

Data wrangling, pre-processing, and automated report generation from Colombia's epidemiological surveillance system, 'SIVIGILA' <https://portalsivigila.ins.gov.co/>. It provides a customizable R Markdown template for analysis and automatic generation of epidemiological reports that can be adapted to local, regional, and national contexts. This tool offers a standardized and reproducible workflow that helps reduce manual labor and potential errors in report generation, improving efficiency and consistency.