Evaluator is an open source quantitative risk analysis toolkit. Based on the [OpenFAIR ontology](https://www2.opengroup.org/ogsys/catalog/C13K) and [risk assessment standard](https://www2.opengroup.org/ogsys/catalog/C13G), Evaluator empowers an organization to perform a quantifiable, repeatable, and data-driven risk review.
Three sample outputs of this toolkit are available:
Install Evaluator via the standard CRAN mechanisms. If you wish to use the optional, but recommended, reporting functions, use the `dependencies = TRUE` option to install the needed additional packages:

```r
install.packages("evaluator", dependencies = TRUE)
```
If you wish to run the development (and potentially bleeding edge) version of Evaluator, you can install directly from GitHub via the `devtools` package:

```r
# install.packages("devtools")
devtools::install_github("davidski/evaluator", dependencies = TRUE)
```
Optionally, a prototype Docker image with all dependencies pre-installed is available on Docker Hub.
The primary workflow for Evaluator involves gathering data in Excel then running the analysis from within the R and Evaluator environment:
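A minimal sketch of that flow is below. The function names `create_templates()`, `run_simulations()`, and `generate_report()` appear elsewhere in this document, but the arguments and intermediate steps shown here are illustrative assumptions, not the canonical invocation:

```r
library(evaluator)

# 1. Populate starter input files into a working directory
#    (the path argument shown is illustrative)
create_templates("~/analysis/inputs")

# 2. Edit the generated Excel survey with your scenarios, then run
#    the simulations (exact arguments are assumptions; see the vignette)
# simulation_results <- run_simulations(scenarios)

# 3. Summarize and report (requires the optional reporting packages)
# generate_report(input_directory = "~/analysis/inputs",
#                 output_file = "risk_report.docx")
```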
A detailed guide is available in the vignette accessed via `vignette("usage", package = "evaluator")`. A short screencast showing the basic workflow (not including generation of reports) is available
While Evaluator is a powerful tool, it does not attempt to address interactions between risk scenarios, rolling up multiple levels of risk into aggregations, or other advanced topics. As you become more comfortable with quantitative risk analysis, you may wish to dive deeper into these areas (and I hope you do!). The following resources may help you explore these and other topics in risk management.
This project is governed by a Code of Conduct. By participating in this project you agree to abide by these terms.
The MIT License applies.
- `read_quantitative_inputs()` added to allow easier skipping over qualitative inputs and going straight to quantitative inputs, such as generated by
- `summarize_domains()` was still referencing an ALE column which did not exist on the summary roll-up (aggregating ALE is possible as a strict sum across scenarios).
- The `capabilities` table has renamed the `capability_id` to be consistent with other ID columns throughout the schema. Survey (Excel) users are not impacted by this change.
- The `run_simulations()` function accounts for this change. Users using the standard flow will not be impacted.
- `summarize_domains()` - Incorporates the now removed `calculate_weak_domains()` function. As part of this consolidation, the `mean_diff_exceedance` calculations are improved by handling NAs in some simulations without zeroing out the entire calculation.
- `summarize_domains()` - Properly calculates `mean_diff_exceedance` when all threat events are successfully avoided.
- `generate_heatmap()` - Takes a `domain_summary` input rather than the deprecated
- `stats` namespaces was practically impossible. All atomic OpenFAIR functions have been refactored to take a fully qualified function (i.e.
- `explore_scenarios()` was trying to assign a `mappings` variable to the global context, which rightly failed. Scaled back the assignment to the current scope.
- `select_loss_opportunities()` properly returns an NA for the threat & difficulty exceedance calculations when there are no threat events in a given simulated period.
- `summarize_scenarios()` - Correctly handles scenarios in which no threat events occur in a given simulation. This bug was limited to `mean_tc_exceedance`. For previously run simulations, re-summarizing the `scenario_results` will generate corrected values.
- `calculate_max_losses()` - No longer returns a duplicate set of results if not passed any outliers.
- `ale_maximum` parameter allows an absolute cap on per-simulation annual losses to be set. This is an interim step in lieu of full hierarchical interaction modeling.
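A hypothetical use of the cap follows; the note above does not name the function that exposes `ale_maximum`, so its placement on `run_simulations()` here is purely an assumption:

```r
# hypothetical: cap each simulated year's annual loss exposure at $10M
# (whether run_simulations() is the function accepting ale_maximum is
# an assumption; consult the package documentation)
# results <- run_simulations(scenarios, ale_maximum = 1e7)
```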
- `run_simulations()` - Errors encountered during runs are now reported more clearly.
- `run_simulations()` - Implements parallel execution via the `furrr` package. To run simulations across all cores of a local machine, call `plan(multicore)` before launching an analysis. For more information, see the
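A minimal sketch of enabling parallel runs; `plan()` comes from the `future` package (which `furrr` builds on), and `plan(multisession)` is the portable alternative on platforms such as Windows where `multicore` is unavailable:

```r
library(evaluator)
library(future)  # provides plan(); furrr builds on future's backends

# enable parallel execution across all local cores
plan(multicore)  # use plan(multisession) on Windows

# then launch the analysis as usual, e.g. (arguments are assumptions):
# results <- run_simulations(scenarios)
```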
- Sampling functions such as `sample_tc()` check if they are asked to generate zero samples, bypassing the underlying generation function. This avoids problems with generating functions which do not gracefully handle being asked to sample a non-positive number (zero) of events.
- `load_data()` now fully specifies the expected CSV file formats, avoiding possible surprises and making invocations less noisy on the console.
- Uses `rlang::.data` constructs, making CRAN checks much simpler.
- Prevents `render` from trying to write to the package install directory; output defaults to `tempdir()`. This can be overwritten on the function call if needed.
- `mappings` contains sample qualitative-to-quantitative parameters.
- `pkgdown`-generated web documentation at https://evaluator.severski.net.
- `format` parameter to specify HTML, PDF, or Word output
- `styles` parameter allows the user to supply a custom CSS or Word reference document to customize styles and fonts
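Assuming these are parameters of `generate_report()` (the reporting function the surrounding notes describe), a hypothetical invocation might look like the following; only `format` and `styles` are named above, so the remaining argument names are illustrative assumptions:

```r
# hypothetical call; input_directory and output_file are assumed names
generate_report(
  input_directory = "~/analysis/inputs",
  output_file     = "risk_report.docx",
  format          = "word",
  styles          = "corporate_reference.docx"  # Word reference document
)
```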
- `devtools::install_github("rstudio/rmarkdown", "b84f706")` or greater.
- `create_templates()` function for populating starter/sample files, making starting a fresh analysis easier than ever!
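A sketch of kicking off a new analysis with the starter files; whether the destination directory is passed positionally is an assumption about the function's signature:

```r
library(evaluator)

# hypothetical: write the starter/sample input files into a fresh
# working directory, then edit them before running simulations
create_templates("~/new_risk_analysis")
```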
- Uses the `extrafont` database for better font detection
- Falls back to the `sans` family when none of the preferred options are available
- Removed the `tcltk` progress bar in favor of the console-compatible `dplyr::progress_estimated()`. Also enables reduced package dependencies.
- `generate_report` defaults to creating an MS Word document as the output type
- `modeest` replaced with a slimmer
- `generate_report()` now takes an optional `focus_scenario_ids` parameter to override the scenarios on which special emphasis (usually executive interest) is desired.
- `summarize_all()` renamed to the more descriptive `summarize_to_disk()` to avoid a dplyr conflict
- `annotate_logticks()` used over manual breaks on `risk_dashboard`