
Found 66 packages in 0.01 seconds

mdsOpt — by Andrzej Dudek, 2 months ago

Searching for Optimal MDS Procedure for Metric and Interval-Valued Data

Selecting the optimal multidimensional scaling (MDS) procedure for metric data via metric MDS (ratio, interval, mspline) and nonmetric MDS (ordinal). Selecting the optimal MDS procedure for interval-valued data via metric MDS (ratio, interval, mspline). Selecting the optimal MDS procedure for interval-valued data by varying all combinations of normalization and optimization methods. Selecting the optimal MDS procedure for statistical data on the evaluation of the tourist attractiveness of Lower Silesian counties (Borg, I., Groenen, P.J.F., Mair, P. (2013); Walesiak, M. (2016); Walesiak, M. (2017)).
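
To illustrate the selection idea, the sketch below compares the four transformation types by their Stress-1 values using the smacof package directly; it is an assumption-laden illustration of the underlying comparison, not the mdsOpt interface (the data set and settings are arbitrary).

    # Sketch: compare MDS transformation types by Stress-1 with smacof.
    # mdsOpt additionally handles normalization methods, interval-valued
    # data, and its own selection criteria.
    library(smacof)

    delta <- dist(scale(swiss))                 # example dissimilarities
    types <- c("ratio", "interval", "mspline", "ordinal")

    stress <- sapply(types, function(ty) {
      smacofSym(delta, ndim = 2, type = ty)$stress
    })
    stress                                      # lower Stress-1 = better fit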

reproducer — by Lech Madeyski, 2 years ago

Reproduce Statistical Analyses and Meta-Analyses

Includes data analysis and meta-analysis functions (e.g., to calculate effect sizes and 95% Confidence Intervals (CI) on Standardised Effect Sizes (d) for AB/BA cross-over repeated-measures experimental designs), data presentation functions (e.g., density curve overlaid on histogram), and the data sets analyzed in different research papers in software engineering (e.g., related to software defect prediction or a multi-site experiment concerning the extent to which structured abstracts were clearer and more complete than conventional abstracts) to streamline reproducible research in software engineering.
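
As a rough illustration of the kind of effect-size computation the package automates, the base-R sketch below computes Cohen's d with a normal-approximation 95% CI for two independent groups; reproducer's own functions (and their corrections for AB/BA cross-over designs) are not used here, and the data are made up.

    # Sketch: standardized mean difference (Cohen's d) with an approximate 95% CI.
    set.seed(1)
    a <- rnorm(20, mean = 10, sd = 2)           # hypothetical treatment scores
    b <- rnorm(20, mean =  8, sd = 2)           # hypothetical control scores

    n1 <- length(a); n2 <- length(b)
    sp <- sqrt(((n1 - 1) * var(a) + (n2 - 1) * var(b)) / (n1 + n2 - 2))  # pooled SD
    d  <- (mean(a) - mean(b)) / sp
    se <- sqrt((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2)))            # approx. SE of d
    c(d = d, lower = d - 1.96 * se, upper = d + 1.96 * se)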

discAUC — by Jonathan E. Friedel, a year ago

Linear and Non-Linear AUC for Discounting Data

Area under the curve (AUC; Myerson et al., 2001) is a popular measure used in discounting research. Although the calculation of AUC is standardized, there are differences in AUC based on some assumptions. For example, Myerson et al. (2001) assumed that (with delay discounting data) a researcher would impute an indifference point at zero delay equal to the value of the larger, later outcome. However, this practice is not clearly followed. This imputed zero-delay indifference point plays an important role in log and ordinal versions of AUC. Ordinal and log versions of AUC are described by Borges et al. (2016). The package can calculate all three versions of AUC [and includes a new version: IHS(AUC)], impute indifference points when x = 0, calculate ordinal AUC in the case of Halton sampling of x-values, and account for probability discounting AUC.
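
The standard linear AUC is easy to sketch in base R; the example below is illustrative only (not discAUC's interface): it imputes the zero-delay indifference point at the larger-later amount, normalizes delays and values, and sums trapezoids, following Myerson et al. (2001).

    # Sketch: linear AUC for delay discounting data (Myerson et al., 2001).
    delays <- c(1, 7, 30, 180, 365)   # hypothetical delays (days)
    indiff <- c(95, 80, 60, 35, 20)   # hypothetical indifference points
    A      <- 100                     # larger, later outcome

    x <- c(0, delays) / max(delays)   # impute x = 0 and normalize delays
    y <- c(A, indiff) / A             # impute y(0) = A and normalize values

    auc <- sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)  # trapezoidal rule
    auc                               # 1 = no discounting, 0 = maximal discounting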

scicomptools — by Angel Chen, 4 months ago

Tools Developed by the NCEAS Scientific Computing Support Team

Set of tools to import, summarize, wrangle, and visualize data. These functions were originally written based on the needs of the various synthesis working groups that were supported by the National Center for Ecological Analysis and Synthesis (NCEAS). These tools are meant to be useful inside and outside of the context for which they were designed.

cops — by Thomas Rusch, a year ago

Cluster Optimized Proximity Scaling

Multidimensional scaling (MDS) methods that aim at pronouncing the clustered appearance of the configuration (Rusch, Mair & Hornik, 2021). They achieve this by transforming proximities/distances with explicit power functions and penalizing the fitting criterion with a clusteredness index, the OPTICS Cordillera (Rusch, Hornik & Mair, 2018). There are two variants: one for finding the configuration directly (COPS-C) with given explicit power transformations and implicit ratio, interval and non-metric optimal scaling transformations (Borg & Groenen, 2005, ISBN:978-0-387-28981-6), and one for using the augmented fitting criterion to find optimal hyperparameters for the explicit transformations (P-COPS). The package contains various functions, wrappers, methods and classes for fitting, plotting and displaying a large number of different MDS models (most of the functionality in smacofx) in the COPS framework. It further contains a function for pattern search optimization, the "Adaptive Luus-Jaakola Algorithm" (Rusch, Mair & Hornik, 2021), and a function to calculate phi-distances for count data or histograms.
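
The explicit power-transformation ingredient of COPS can be sketched without the package itself; the example below fits an ordinary MDS to power-transformed dissimilarities with smacof (the exponent is arbitrary, and the OPTICS Cordillera penalty that defines COPS proper is omitted), so it illustrates the idea rather than the cops or smacofx API.

    # Sketch: MDS on power-transformed dissimilarities (the explicit
    # transformation used in COPS-C); cops additionally penalizes the loss
    # with the OPTICS Cordillera and can tune the exponent (P-COPS).
    library(smacof)

    delta   <- dist(scale(iris[, 1:4]))                # example dissimilarities
    lambda  <- 1.5                                     # hypothetical exponent
    delta_t <- as.dist(as.matrix(delta)^lambda)        # power transformation

    fit <- smacofSym(delta_t, ndim = 2, type = "ratio")
    plot(fit$conf, pch = 19, xlab = "Dim 1", ylab = "Dim 2",
         main = "MDS of power-transformed dissimilarities")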

httk — by John Wambaugh, 6 days ago

High-Throughput Toxicokinetics

Pre-made models that can be rapidly tailored to various chemicals and species using chemical-specific in vitro data and physiological information. These tools allow incorporation of chemical toxicokinetics ("TK") and in vitro-in vivo extrapolation ("IVIVE") into bioinformatics, as described by Pearce et al. (2017). Chemical-specific in vitro data characterizing toxicokinetics have been obtained from relatively high-throughput experiments. The chemical-independent ("generic") physiologically based ("PBTK") and empirical (for example, one-compartment) "TK" models included here can be parameterized with in vitro data or in silico predictions, which are provided for thousands of chemicals, multiple exposure routes, and various species. High-throughput toxicokinetics ("HTTK") is the combination of in vitro data and generic models. We establish the expected accuracy of HTTK for chemicals without in vivo data through statistical evaluation of HTTK predictions for chemicals where in vivo data do exist. The models are systems of ordinary differential equations that are developed in MCSim and solved using compiled (C-based) code for speed. A Monte Carlo sampler is included for simulating human biological variability (Ring et al., 2017) and propagating parameter uncertainty (Wambaugh et al., 2019). Empirically calibrated methods are included for predicting tissue:plasma partition coefficients and volume of distribution (Pearce et al., 2017). These functions and data provide a set of tools for using IVIVE to convert concentrations from high-throughput screening experiments (for example, Tox21, ToxCast) to real-world exposures via reverse dosimetry (also known as "RTK") (Wetmore et al., 2015).
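
A minimal usage sketch follows; it assumes httk's documented entry points solve_pbtk() and calc_mc_css(), and the chemical name, dosing scheme, and quantile are illustrative choices that may need adjusting to the installed version.

    # Sketch of typical httk usage (illustrative arguments).
    library(httk)

    # Time course of plasma concentration from the generic PBTK model
    # for a once-daily oral dose of one chemical.
    out <- solve_pbtk(chem.name = "Bisphenol A", days = 10, doses.per.day = 1)
    head(out)

    # Monte Carlo estimate of the 95th-percentile steady-state plasma
    # concentration, as used in reverse dosimetry ("RTK").
    css95 <- calc_mc_css(chem.name = "Bisphenol A", which.quantile = 0.95)
    css95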