Thematic maps are geographical maps on which spatial data distributions are visualized. This package offers a flexible, layer-based, and easy-to-use approach to creating thematic maps, such as choropleths and bubble maps.
A treemap is a space-filling visualization of hierarchical structures. This package offers great flexibility to draw treemaps.
Thematic Map Tools
A set of tools for reading and processing spatial data, aimed at supplying the workflow needed to create thematic maps. This package also supports 'tmap', the package for visualizing thematic maps.
Tableplot, a Visualization of Large Datasets
A tableplot is a visualization of a (large) dataset with a dozen variables, both numeric and categorical. Each column represents a variable and each row bin is an aggregate of a certain number of records. Numeric variables are visualized as bar charts, and categorical variables as stacked bar charts. Missing values are taken into account. Large 'ffdf' datasets from the 'ff' package are also supported.
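The row-bin aggregation behind a tableplot can be sketched in plain Python (this is an illustration of the idea, not the package's actual R API): rows are sorted by a key variable, split into a fixed number of bins, and each bin is summarized with a mean for numeric columns and category counts for categorical ones.

```python
from statistics import mean
from collections import Counter

def tableplot_bins(rows, sort_key, n_bins):
    """Aggregate sorted rows into n_bins per-bin summaries."""
    ordered = sorted(rows, key=lambda r: r[sort_key])
    bin_size = max(1, len(ordered) // n_bins)
    summaries = []
    for i in range(0, len(ordered), bin_size):
        chunk = ordered[i:i + bin_size]
        summary = {}
        for col in chunk[0]:
            values = [r[col] for r in chunk]
            if isinstance(values[0], (int, float)):
                summary[col] = mean(values)     # numeric: bar length
            else:
                summary[col] = Counter(values)  # categorical: stacked bar segments
        summaries.append(summary)
    return summaries
```

Each summary then maps directly onto one row of the plot: the numeric means become bar lengths, and the category counts become stacked bar segments.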
Prediction Model Selection and Performance Evaluation in Multiply Imputed Datasets
Provides functions to apply pooling or backward selection of logistic, Cox regression, and multilevel (mixed model) prediction models in multiply imputed datasets. Backward selection can be done from the pooled model using Rubin's Rules (RR), the D1, D2, D3, and median p-values methods. The model can contain continuous, dichotomous, and categorical predictors, as well as interaction terms between all these types of predictors. Continuous predictors can also be introduced as restricted cubic spline coefficients. It is also possible to force (spline) predictors or interaction terms into the model during predictor selection. The package includes a function to evaluate the stability of the models using bootstrapping and cluster bootstrapping. It further contains functions to generate pooled model performance measures in multiply imputed datasets, such as ROC/AUC, R-squared values, the Brier score, fit test values, and calibration plots for logistic regression models. A function to apply bootstrap internal validation is also available, with two methods for combining bootstrapping and multiple imputation: boot_MI first draws bootstrap samples and subsequently performs multiple imputation, while MI_boot first draws bootstrap samples from each imputed dataset before combining the results. The adjusted intercept after shrinkage of the pooled regression coefficients can subsequently be obtained. Backward selection as part of internal validation is also an option. A function to externally validate logistic prediction models in multiply imputed datasets is also available.
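The two orderings for combining bootstrapping with multiple imputation can be sketched schematically in Python (hypothetical helper callables, not the package's R API): boot_MI resamples first and imputes each resample, while MI_boot imputes first and resamples within each completed dataset.

```python
import random

def boot_MI(data, impute, fit, n_boot, n_imp, seed=0):
    """Draw bootstrap samples first, then multiply impute each sample."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]  # bootstrap resample with replacement
        results.append([fit(impute(sample)) for _ in range(n_imp)])
    return results

def MI_boot(data, impute, fit, n_boot, n_imp, seed=0):
    """Multiply impute first, then bootstrap within each imputed dataset."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_imp):
        completed = impute(data)  # one completed (imputed) dataset
        results.append([fit([rng.choice(completed) for _ in completed])
                        for _ in range(n_boot)])
    return results
```

In both cases the result is a grid of fitted estimates over bootstrap samples and imputations, which is then pooled for the validation measures.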
Routines for Performing Empirical Calibration of Observational Study Estimates
Routines for performing empirical calibration of observational study estimates. By using a set of negative control hypotheses we can estimate the empirical null distribution of a particular observational study setup. This empirical null distribution can be used to compute a calibrated p-value, which reflects the probability of observing an estimated effect size when the null hypothesis is true, taking both random and systematic error into account. A similar approach can be used to calibrate confidence intervals, using both negative and positive controls.
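The core idea can be sketched numerically in plain Python (a simplified illustration, not the package's R API): fit a normal empirical null to the negative-control estimates, then compute a two-sided calibrated p-value for a new estimate against that null rather than against a standard null centered at zero.

```python
import math
from statistics import mean, pstdev

def calibrated_p(negative_controls, estimate):
    """Two-sided p-value of `estimate` under a normal empirical null
    fitted to the negative-control estimates."""
    mu = mean(negative_controls)       # systematic error (bias) of the study setup
    sigma = pstdev(negative_controls)  # spread of the empirical null
    z = (estimate - mu) / sigma
    # two-sided tail probability of the standard normal
    return math.erfc(abs(z) / math.sqrt(2))
```

If the negative controls are biased away from zero, the calibrated p-value is larger than the conventional one, reflecting the systematic error in the study design.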
Rendering Parameterized SQL and Translation to Dialects
A rendering tool for parameterized SQL that also translates into different SQL dialects. These dialects include 'Microsoft SQL Server', 'Oracle', 'PostgreSQL', 'Amazon Redshift', 'Apache Impala', 'IBM Netezza', 'Google BigQuery', 'Microsoft PDW', and 'SQLite'.
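The two steps described above can be illustrated with a toy Python sketch (not SqlRender's actual API or rule set): substitute @-prefixed parameters into a SQL template, then apply dialect-specific rewrite rules. The single translation rule shown is a hypothetical example; the real dialect differences are far richer.

```python
import re

# hypothetical, minimal translation table (illustrative only)
DIALECT_RULES = {
    "postgresql": [(r"ISNULL\(", "COALESCE(")],
}

def render(sql, **params):
    """Replace @name placeholders with parameter values."""
    for name, value in params.items():
        sql = sql.replace("@" + name, str(value))
    return sql

def translate(sql, dialect):
    """Apply simple regex rewrite rules for the target dialect."""
    for pattern, repl in DIALECT_RULES.get(dialect, []):
        sql = re.sub(pattern, repl, sql)
    return sql
```

Keeping rendering and translation as separate passes means the same parameterized template can be reused unchanged across database platforms.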
Support for Parallel Computation, Logging, and Function Automation
Support for parallel computation with progress bar, and option to stop or proceed on errors. Also provides logging to console and disk, and the logging persists in the parallel threads. Additional functions support function call automation with delayed execution (e.g. for executing functions in parallel).
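The stop-or-proceed-on-errors pattern described here can be sketched with Python's concurrent.futures (an illustration of the pattern, not the package's R API): run tasks in parallel, and either re-raise the first error or collect errors and keep going.

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(func, args_list, stop_on_error=True, max_workers=4):
    """Apply func to each item in parallel; handle errors per the flag."""
    results, errors = [], []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(func, a) for a in args_list]
        for fut in futures:
            try:
                results.append(fut.result())
            except Exception as exc:
                if stop_on_error:
                    raise          # abort the whole batch on the first failure
                errors.append(exc)  # proceed, keeping the error for reporting
    return results, errors
```

Collecting results by iterating the futures in submission order preserves the input ordering, which keeps the successful results aligned with their arguments when errors are skipped.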
Asynchronous Disk-Based Representation of Massive Data
Storing very large data objects on a local drive, while still making it possible to manipulate the data in an efficient manner.