
Found 434 packages in 0.04 seconds

condvis — by Mark O'Connell, 7 years ago

Conditional Visualization for Statistical Models

Exploring fitted models by interactively taking 2-D and 3-D sections in data space.
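
A hedged sketch of the kind of interactive sectioning described above, assuming ceplot() is the package's main entry point and takes data, model, and sectionvars arguments; check ?ceplot in the installed version for the exact interface.

library(condvis)

# Fit a model, then open an interactive display showing the response against
# 'wt' in a 2-D section of data space, conditioned on the remaining predictor.
m <- lm(mpg ~ wt + hp, data = mtcars)
ceplot(data = mtcars, model = m, sectionvars = "wt")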

scdensity — by Mark A. Wolters, a year ago

Shape-Constrained Kernel Density Estimation

Implements methods for obtaining kernel density estimates subject to a variety of shape constraints (unimodality, bimodality, symmetry, tail monotonicity, bounds, and constraints on the number of inflection points). Enforcing constraints can eliminate unwanted waves or kinks in the estimate, which improves its subjective appearance and can also improve statistical performance. The main function scdensity() is very similar to the density() function in 'stats', allowing shape-restricted estimates to be obtained with little effort. The methods implemented in this package are described in Wolters and Braun (2017), Wolters (2012), and Hall and Huang (2002) <https://www3.stat.sinica.edu.tw/statistica/j12n4/j12n41/j12n41.htm>. See the scdensity() help for full citations.
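
A short usage sketch based on the stated similarity to density(); the constraint = "unimodal" argument and the $x/$y components of the result are assumptions drawn from that similarity, so consult ?scdensity for the exact option names.

library(scdensity)

set.seed(1)
x <- c(rnorm(180), rnorm(20, mean = 4))          # data with a small spurious bump

d_free <- density(x)                             # unconstrained estimate from 'stats'
d_uni  <- scdensity(x, constraint = "unimodal")  # shape-restricted estimate (assumed argument name)

plot(d_free, main = "Unconstrained vs. unimodal KDE")
lines(d_uni$x, d_uni$y, col = "red")             # assumes a density()-like return value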

googleComputeEngineR — by Mark Edmondson, 7 years ago

R Interface with Google Compute Engine

Interact with the 'Google Compute Engine' API in R. Lets you create, start, and stop instances in the 'Google Cloud'. Supports preconfigured instances, with templates for common R needs.
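
A hedged sketch of creating and stopping a templated instance; it assumes the gce_vm() and gce_vm_stop() interface and that authentication plus a default project and zone are already configured, so treat the argument names as approximate and see the package's setup vignettes.

library(googleComputeEngineR)

# Launch an RStudio Server VM from a preconfigured template (assumed arguments).
vm <- gce_vm(
  name            = "demo-rstudio",
  template        = "rstudio",
  predefined_type = "n1-standard-1",
  username        = "demo",
  password        = "change-me"
)

# ... work with the instance, then stop it to avoid charges.
gce_vm_stop(vm)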

tpm — by Mark Egge, 2 years ago

FHWA TPM Score Calculation Functions

Contains functions for calculating the Federal Highway Administration (FHWA) Transportation Performance Management (TPM) performance measures. Currently, the package provides methods for the System Reliability and Freight (PM3) performance measures calculated from travel time data provided by The National Performance Management Research Data Set (NPMRDS), including Level of Travel Time Reliability (LOTTR), Truck Travel Time Reliability (TTTR), and Peak Hour Excessive Delay (PHED) metric scores for calculating statewide reliability performance measures. Implements <https://www.fhwa.dot.gov/tpm/guidance/pm3_hpms.pdf>.
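
As an illustration of the measure itself rather than this package's functions (whose names are not listed here), the LOTTR score for a segment and time period is the ratio of the 80th to the 50th percentile travel time, and under the PM3 guidance a segment is generally treated as reliable when that ratio stays below 1.50; a plain-R sketch:

# Not the tpm API: a base-R illustration of the LOTTR calculation.
lottr <- function(travel_times_sec) {
  q <- quantile(travel_times_sec, probs = c(0.5, 0.8), names = FALSE)
  round(q[2] / q[1], 2)    # 80th percentile over 50th percentile
}

set.seed(42)
obs <- rlnorm(500, meanlog = log(120), sdlog = 0.25)  # simulated segment travel times
lottr(obs)                                            # below 1.50 -> counted as reliable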

crosstalk — by Carson Sievert, 5 months ago

Inter-Widget Interactivity for HTML Widgets

Provides building blocks for allowing HTML widgets to communicate with each other, with Shiny or without (i.e. static .html files). Currently supports linked brushing and filtering.
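
A small sketch of the linked filtering this enables: a slider and a table share one SharedData object, so the slider filters the table with no Shiny server involved (DT is used here as one example of a crosstalk-aware widget).

library(crosstalk)
library(DT)

shared_cars <- SharedData$new(mtcars)

# Rendered in an R Markdown document or saved as a static .html page,
# moving the slider filters the rows shown in the table.
bscols(
  filter_slider("hp", "Horsepower", shared_cars, ~hp),
  datatable(shared_cars)
)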

ascii — by Mark Clements, 2 years ago

Export R Objects to Several Markup Languages

Coerce R objects to 'asciidoc', 'txt2tags', 'restructuredText', 'org', 'textile' or 'pandoc' syntax. The package comes with a set of drivers for 'Sweave'.
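
A brief sketch, assuming ascii() is the coercion generic and that its print method takes a type argument naming the target syntax; check ?ascii for the formats actually supported.

library(ascii)

# Coerce a data frame and a model summary, then render as 'org' and 'pandoc'
# markup respectively (type values assumed from the formats listed above).
print(ascii(head(iris)), type = "org")

fit <- lm(mpg ~ wt, data = mtcars)
print(ascii(summary(fit)), type = "pandoc")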

rmdHelpers — by Mark Peterson, 2 years ago

Helper Functions for Rmd Documents

A series of functions to aid in repeated tasks for Rmd documents. All defaults reflect my personal preferences, though I am happy to add flexibility if there are use cases I am missing. I will continue to add utility functions as I write them for myself.

con2lki — by Mark Baas, 5 years ago

Calculate the Dutch Air Quality Index (LKI)

Calculates the Dutch air quality index (LKI). This index was created on the basis of scientific studies of the health effects of air pollution. From these studies it can be deduced at what concentrations a certain percentage of the population can be affected. For more information see: <https://www.rivm.nl/bibliotheek/rapporten/2014-0050.pdf>.

MetProc — by Mark Chaffin, 10 years ago

Separate Metabolites into Likely Measurement Artifacts and True Metabolites

Split an untargeted metabolomics data set into a set of likely true metabolites and a set of likely measurement artifacts. This process involves comparing missing rates of pooled plasma samples and biological samples. The functions assume a fixed injection order of samples where biological samples are randomized and processed between intermittent pooled plasma samples. By comparing patterns of missing data across injection order, metabolites that appear in blocks and are likely artifacts can be separated from metabolites that seem to have random dispersion of missing data. The two main metrics used are: 1. the number of consecutive blocks of samples with present data and 2. the correlation of missing rates between biological samples and flanking pooled plasma samples.
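
The two metrics can be illustrated in plain R (this is only a sketch of the ideas, not MetProc's own functions):

# Metric 1: consecutive blocks of samples with present data, over injection order.
present <- c(TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, TRUE, FALSE, TRUE)
runs <- rle(present)
sum(runs$values)        # number of runs of present data

# Metric 2: correlation of per-metabolite missing rates between biological
# samples and the flanking pooled plasma samples (toy numbers).
miss_bio  <- c(0.10, 0.50, 0.05, 0.80)
miss_pool <- c(0.12, 0.08, 0.06, 0.10)
cor(miss_bio, miss_pool)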

deductive — by Mark van der Loo, a year ago

Data Correction and Imputation Using Deductive Methods

Attempt to repair inconsistencies and missing values in data records by using information from valid values and validation rules restricting the data.
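
A hedged sketch of rule-driven deduction, assuming the package is used together with 'validate' rule sets and exposes impute_lr() for deducing values implied by linear rules; consult the package manual for the authoritative function names.

library(validate)
library(deductive)

rules <- validator(
  total == staff_cost + other_cost,
  staff_cost >= 0
)

dat <- data.frame(total = 100, staff_cost = NA, other_cost = 30)

# staff_cost is uniquely determined as 100 - 30 = 70 by the first rule.
impute_lr(dat, rules)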