Thematic Maps
Thematic maps are geographical maps in which spatial data distributions are visualized. This package offers a flexible, layer-based, and easy-to-use approach to creating thematic maps, such as choropleths and bubble maps.
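A minimal sketch of the layer-based syntax, using the 'World' example dataset that ships with 'tmap' (the 'HPI' column is assumed from that dataset):

    # Each map is built from stacked layers: a shape layer supplies the
    # spatial object, and subsequent layers draw it.
    library(tmap)
    data(World)

    tm_shape(World) +       # spatial object to draw
      tm_polygons("HPI")    # choropleth fill by the Happy Planet Index column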
Thematic Map Tools
Set of tools for reading and processing spatial data. The aim is to supply the workflow needed to create thematic maps. This package also supports 'tmap', the package for visualizing thematic maps.
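A small sketch of the workflow, assuming the geocode_OSM() helper (it queries the OSM Nominatim service, so this needs internet access; the return-value structure is as recalled from the tmaptools documentation):

    # Geocode a place name, then inspect the coordinates and bounding box
    # that could seed a tmap view.
    library(tmaptools)

    amsterdam <- geocode_OSM("Amsterdam")
    amsterdam$coords  # named vector with x (longitude) and y (latitude)
    amsterdam$bbox    # bounding box of the matched place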
Treemap Visualization
A treemap is a space-filling visualization of hierarchical structures. This package offers great flexibility to draw treemaps.
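A minimal sketch with a made-up data frame; rectangle area encodes the size variable within the given hierarchy:

    library(treemap)

    # Toy sales data, invented for illustration
    sales <- data.frame(
      region  = c("North", "North", "South", "South"),
      product = c("A", "B", "A", "B"),
      revenue = c(40, 25, 30, 15)
    )

    treemap(sales,
            index = c("region", "product"),  # hierarchy: region, then product
            vSize = "revenue")               # rectangle area proportional to revenue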
Create and Explore Geographic Zoning Systems
Functions, documentation and example data to help divide geographic space into discrete polygons (zones). The functions are motivated by research into the merits of different zoning systems.
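A sketch of generating a zoning system around a named place; zb_zone() and its n_circles argument are recalled from the package documentation, so treat this as an assumption to verify:

    library(zonebuilder)

    # Concentric, clock-segmented ("ClockBoard") zones around a geocoded point
    zones <- zb_zone("London", n_circles = 4)
    plot(zones$geometry)  # zones is an sf object with polygon geometries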
Memory-Efficient Storage of Large Data on Disk and Fast Access Functions
The ff package provides data structures that are stored on disk but behave (almost) as if they were in RAM, by transparently mapping only a section (pagesize) into main memory - the effective virtual memory consumption per ff object. ff supports R's standard atomic data types 'double', 'logical', 'raw' and 'integer', as well as non-standard atomic types: boolean (1 bit), quad (2 bit unsigned), nibble (4 bit unsigned), byte (1 byte signed with NAs), ubyte (1 byte unsigned), short (2 byte signed with NAs), ushort (2 byte unsigned) and single (4 byte float with NAs). For example, 'quad' allows efficient storage of genomic data as an 'A','T','G','C' factor. The unsigned types support 'circular' arithmetic. There is also support for close-to-atomic types 'factor', 'ordered', 'POSIXct', 'Date' and custom close-to-atomic types.

ff has native C support for vectors, matrices and arrays with flexible dimorder (major column-order, major row-order and generalizations for arrays), an ffdf class not unlike data.frames, and import/export filters for csv files. ff objects store raw data in binary flat files in native encoding and complement this with metadata stored in R as physical and virtual attributes. ff objects have well-defined hybrid copying semantics, which enables certain performance improvements through virtualization. ff objects can be stored and reopened across R sessions, and ff files can be shared by multiple ff R objects (using different data en/de-coding schemes) in the same process or from multiple R processes to exploit parallelism. A wide choice of finalizer options allows working with 'permanent' files as well as creating and removing 'temporary' ff files completely transparently to the user. On certain OS/filesystem combinations, creating ff files works without notable delay thanks to sparse file allocation.

Several access optimization techniques, such as Hybrid Index Preprocessing and Virtualization, are implemented to achieve good performance even with large datasets; for example, a virtual matrix transpose touches not a single byte on disk. Further, to reduce disk I/O, 'logicals' and non-standard data types are stored natively and compactly in the binary flat files, i.e. logicals take up exactly 2 bits to represent TRUE, FALSE and NA.

Beyond basic access functions, the ff package provides compatibility functions that facilitate writing code for both ff and ram objects, and support for batch processing on ff objects (e.g. as.ram, as.ff, ffapply). ff interfaces closely with functionality from package 'bit': chunked looping, fast bit operations and coercions between different objects that can store subscript information ('bit', 'bitwhich', ff 'boolean', ri range index, hi hybrid index). This allows working interactively with selections of large datasets and quickly modifying selection criteria. Further high-performance enhancements can be made available upon request.
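A minimal sketch of disk-backed vectors and the ffdf class:

    library(ff)

    # A disk-backed double vector: the data live in a binary flat file,
    # and only a page is mapped into RAM at a time.
    x <- ff(vmode = "double", length = 1e7)
    x[1:5] <- sqrt(1:5)   # reads and writes go through the mapped page
    x[1:5]

    # A data.frame-like object composed of ff vectors
    df <- ffdf(id = ff(1:100), value = ff(vmode = "double", length = 100))
    dim(df)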
Rendering Parameterized SQL and Translation to Dialects
A rendering tool for parameterized SQL that also translates into different SQL dialects. These dialects include 'Microsoft SQL Server', 'Oracle', 'PostgreSQL', 'Amazon Redshift', 'Apache Impala', 'IBM Netezza', 'Google BigQuery', 'Microsoft PDW', 'Apache Spark', and 'SQLite'.
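A short sketch of the two core steps, render() for parameter substitution and translate() for dialect translation (the @schema parameter and table name are illustrative):

    library(SqlRender)

    # Substitute parameters into the template ...
    sql <- render("SELECT TOP 10 * FROM @schema.person;", schema = "cdm")

    # ... then translate to a target dialect; PostgreSQL has no TOP,
    # so the translator rewrites the query with LIMIT.
    translate(sql, targetDialect = "postgresql")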
Interpreting Time Series and Autocorrelated Data Using GAMMs
GAMM (Generalized Additive Mixed Modeling; Lin & Zhang, 1999), as implemented in the R package 'mgcv' (Wood, S.N., 2006; 2011), is a nonlinear regression method that is particularly useful for time course data such as EEG, pupil dilation, gaze data (eye tracking), and articulography recordings, but also for behavioral data such as reaction times and response data. As time course measures are sensitive to autocorrelation, GAMMs implement methods to reduce it. This package includes functions for the evaluation of GAMM models (e.g., model comparisons, determining regions of significance, inspection of the autocorrelational structure in residuals) and for the interpretation of GAMMs (e.g., visualization of complex interactions, and contrasts).
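A sketch of the workflow: fit an AR(1)-corrected GAMM with 'mgcv', then inspect it with 'itsadug'. The data frame dat, with columns y, Time and Subject, is hypothetical:

    library(mgcv)
    library(itsadug)

    # Mark where each subject's time series starts, so the AR(1) model
    # does not run across series boundaries.
    dat <- start_event(dat, column = "Time", event = "Subject")

    m <- bam(y ~ s(Time) + s(Time, Subject, bs = "fs"),
             data = dat, rho = 0.6, AR.start = dat$start.event)

    acf_resid(m)                   # residual autocorrelation after correction
    plot_smooth(m, view = "Time")  # fitted smooth over time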
Support for Parallel Computation, Logging, and Function Automation
Support for parallel computation with a progress bar and the option to stop or proceed on errors. Also provides logging to console and disk, and the logging persists in the parallel threads. Additional functions support function call automation with delayed execution (e.g. for executing functions in parallel).
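A sketch of logging to a file and running a function across worker threads; function names follow the ParallelLogger API as documented by OHDSI, and the log file path is illustrative:

    library(ParallelLogger)

    addDefaultFileLogger("run.log")      # persist log events to disk
    logInfo("Starting parallel run")     # logging also works from worker threads

    cluster <- makeCluster(numberOfThreads = 3)
    results <- clusterApply(cluster, 1:10, function(i) i^2)
    stopCluster(cluster)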
Prediction Model Pooling, Selection and Performance Evaluation Across Multiply Imputed Datasets
Pooling and backward and forward selection of linear, logistic and Cox regression models in multiply imputed datasets. Backward and forward selection can be done from the pooled model using Rubin's Rules (RR), the D1, D2, D3, D4 and the median p-values method; this is also possible for mixed models. The models can contain continuous, dichotomous, categorical and restricted cubic spline predictors, and interaction terms between all these types of predictors. The stability of the models can be evaluated using bootstrapping and cluster bootstrapping. The package further contains functions to pool model performance measures such as the ROC/AUC, R-squared, the scaled Brier score, the Hosmer and Lemeshow test, and calibration plots for logistic regression models. Internal validation can be done with cross-validation or bootstrapping, and the adjusted intercept after shrinkage of pooled regression coefficients can be obtained. Backward and forward selection as part of internal validation is possible. A function to externally validate logistic prediction models in multiply imputed datasets is available, as well as a function to compare models.
Eekhout (2017)
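A sketch of pooled backward selection for a logistic model; the lbpmilr example data, the Impnr imputation-index variable, and the direction argument are recalled from the package vignettes, so verify them against the current API:

    library(psfmi)

    fit <- psfmi_lr(data = lbpmilr, nimp = 10, impvar = "Impnr",
                    formula = Chronic ~ Gender + Smoking + Function,
                    p.crit = 0.05, method = "D1", direction = "BW")
    fit$RR_model  # pooled coefficients of the model at the final step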
Data and Statistical Analyses after Multiple Imputation
Statistical Analyses and Pooling after Multiple Imputation. A large variety of repeated statistical analyses can be performed and subsequently pooled. Available analyses include, among others, Levene's test, odds and risk ratios, one-sample proportions, differences between proportions, and linear and logistic regression models. Functions can also be used in combination with the pipe operator. More statistical analyses and pooling functions will be added over time.
Heymans (2007)
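The description above does not show the package's own calls, so as an illustration of the underlying repeat-then-pool pattern, here is the canonical workflow from the 'mice' package, which 'miceafter' extends:

    library(mice)

    imp <- mice(nhanes, m = 5, print = FALSE)   # create 5 imputed datasets
    fit <- with(imp, lm(bmi ~ age))             # repeat the analysis in each
    summary(pool(fit))                          # pool the results with Rubin's Rules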