Quantified Risk Assessment Toolkit

An open source risk analysis toolkit based on the OpenFAIR taxonomy <https://www2.opengroup.org/ogsys/catalog/C13K> and risk assessment standard <https://www2.opengroup.org/ogsys/catalog/C13G>. Empowers an organization to perform a quantifiable, repeatable, and data-driven risk review.

David F. Severski



Evaluator is an open source information security strategic risk analysis toolkit. Based on the OpenFAIR taxonomy and risk assessment standard, Evaluator empowers an organization to perform a quantifiable, repeatable, and data-driven review of its security program.

Three sample outputs of this toolkit are available:

  1. A detailed risk analysis template, located at RPubs
  2. A one page risk dashboard, also located at RPubs
  3. A demonstration copy of Scenario Explorer


The first iterations of Evaluator were created as part of a major healthcare organization's decision to shift its already mature risk assessment program from reliance on qualitative labels to a quantitative model that would support more precise comparison of competing projects. This organization was able to use statistical sampling to gain greater insight into its information risks, to meet HIPAA compliance obligations, and to provide manager- to board-level business leaders with the data needed to drive decision making.

Since its creation, versions of Evaluator have been deployed both inside and outside the healthcare field.

How to Use

The Evaluator toolkit consists of a series of processes implemented in the R language. Starting from an Excel workbook, risk data is imported and run through a simulation model to estimate the expected losses for each scenario. The results of these simulations are used to create a detailed analysis and a formal risk report. A starter analysis report, overview dashboard and sample Shiny application are all included in the toolkit.

Evaluator takes a domain-driven and framework-independent approach to strategic security risk analysis. It is compatible with ISO, COBIT, HITRUST CSF, PCI-DSS or any other model used for organizing an information security program. If you are able to describe the domains of your program and the controls and threat scenarios applicable to each domain, you will be able to use Evaluator!


This README does not define terms commonly used in an OpenFAIR analysis. While not a prerequisite, a review of OpenFAIR methodology and terminology is highly recommended. Familiarity with the R language is also very helpful.

Follow these six steps to run the toolkit:

  1. Prepare the environment
  2. Define your security domains
  3. Define your controls and risk scenarios
  4. Import the scenarios
  5. Run the simulations
  6. Analyze the results

Don't be intimidated by the process. Evaluator is with you at every step!

Prepare the Environment

A working R interpreter is required. Evaluator should work on any current version of R (v3.3.2 as of this writing) and on any supported platform (Windows, macOS, or Linux). This README assumes the use of the RStudio IDE, but it is not strictly required (advanced users may manually knit files if they so choose).

Obtain the Evaluator toolkit via install.packages("evaluator"). If you'd like to use the development version, you can install the GitHub version via devtools::install_github("davidski/evaluator").

Define Your Security Domains

Evaluator needs to know the domains of your security program. These are the major buckets into which you subdivide your program, typically including areas such as Physical Security, Strategy, Policy, Business Continuity/Disaster Recovery, Technical Security, etc. Out of the box, Evaluator comes with a demonstration model based upon the HITRUST CSF. If you have a different domain structure (e.g. ISO2700x, NIST CSF, or COBIT), you will need to edit the data/domains.csv file to include the domain name, the domain ID, and a shorthand abbreviation for each domain (such as POL for the Policy domain).
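As a sketch, a customized domains.csv might look like the following. The column names and layout here are illustrative; match them to the demonstration file shipped with the package:

    domain_id,domain
    POL,Policy
    PHY,Physical Security
    BCP,Business Continuity/Disaster Recovery
    TECH,Technical Security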

Define Your Controls and Risk Scenarios

Identifying the controls (or capabilities) and key risk scenarios associated with each of your domains is critical to the final analysis. These elements are documented in an Excel workbook. The workbook includes one tab per domain, with each tab named after the domain ID defined in the previous step. Each tab consists of a controls table and a threats table.

Controls Table

The key objectives of each domain are defined in the domain controls table. While the specific controls will be unique to each organization, the sample spreadsheet included in Evaluator may be used as a model. It is best to avoid copying every technical control from, for example, ISO 27001 or COBIT, since most control frameworks are too fine-grained to provide the high-level overview that Evaluator delivers. In practice, 50 or fewer controls will usually be sufficient to describe organizations of up to one to two billion USD in size. Each control must have a unique ID and should be assigned a difficulty (DIFF) score that ranks the maturity of the control on a CMM scale from Initial (lowest score) to Optimized (best in class).
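For orientation, a controls table might contain rows along these lines. The column headings and rating labels shown are illustrative, not the exact workbook schema; refer to the sample spreadsheet for the real layout:

    ID     Control                                           DIFF
    POL-1  Security policies are documented and maintained   4 - Managed
    POL-2  Policy exceptions are tracked and reviewed        3 - Defined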

Threats Table

The threats table consists of the potential loss scenarios addressed by each domain of your security program. Each scenario is made up of a description of who did what to whom, the threat community that executed the attack (e.g. external hacktivist, internal workforce member, third party vendor), how often the threat actor acts upon your assets (TEF), the strength with which they act against your assets (TCap), the losses incurred (LM), and a comma-separated list of the control IDs that prevent the scenario.
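An illustrative threats table entry follows. The field layout and the TCap/LM labels are placeholders; use the qualitative values defined in your mappings file:

    Scenario:          An external hacktivist defaces the public web site
    Threat community:  External hacktivist
    TEF: Frequent      TCap: Moderate      LM: Significant
    Controls:          POL-1, TECH-3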

Import the Scenarios

To extract the spreadsheet into data files for further analysis, launch RStudio, open the 0-import_survey.Rmd notebook, and click knit. The notebook performs basic data validation on the workbook and extracts the data. If there are data validation errors, the process aborts and displays an error message; correct the spreadsheet and re-knit the notebook.
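If you prefer working from the R console rather than the RStudio knit button, the same step can be run with rmarkdown::render():

    # equivalent to clicking knit in RStudio
    rmarkdown::render("0-import_survey.Rmd")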

Run the Simulations

Once the data is ready for the simulations, open the 1-simulate_risk.Rmd notebook and click knit. By default, Evaluator puts each scenario through 10,000 individual simulated years, modelling how often the threat actor will come into contact with your assets, the strength of the threat actor, the strength of your controls, and the losses involved in any situation where the threat strength exceeds your control strength. This simulation process can be computationally intensive. The sample data set takes approximately 5-7 minutes on my primary development machines (last generation Windows-based platforms).
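Conceptually, each simulated year samples how often the threat actor acts, whether threat capability exceeds control difficulty, and the magnitude of any resulting loss. The following is a minimal illustrative sketch of that loop in base R; it is not Evaluator's actual implementation (which draws on calibrated distributions via mc2d), and all parameter values are made up:

    n <- 10000                                   # simulated years
    tef  <- rpois(n, lambda = 2)                 # threat events per year
    tcap <- runif(n, min = 40, max = 90)         # threat capability percentile
    diff <- runif(n, min = 50, max = 95)         # control difficulty percentile
    lm   <- rlnorm(n, meanlog = 10, sdlog = 1)   # loss magnitude per event
    loss_events <- ifelse(tcap > diff, tef, 0)   # events succeed when TCap > DIFF
    annual_loss <- loss_events * lm
    summary(annual_loss)                         # distribution of annual losses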

Analyze the Results

A template for a technical risk report is provided in 2-analyze_risk.Rmd. To use it, open the document and click on Knit to Word. This creates a pre-populated risk report that identifies key scenarios and generates initial plots to be used in creating a final analysis report. The risk_dashboard.Rmd file builds an executive summary dashboard.

For interactive exploration, open the explore_scenarios.Rmd file and click on Run Document to launch a local copy of the Scenario Explorer application. The Scenario Explorer app may be used to view information about the individual scenarios and provides a sample overview of the entire program.
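As with the other notebooks, the Explorer can also be launched from the console; rmarkdown::run() serves interactive Shiny documents:

    # serves the Scenario Explorer locally in a browser
    rmarkdown::run("explore_scenarios.Rmd")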

For more in depth analysis, the following data files may be used directly from R or from external programs such as Tableau:

Data File               Purpose
simulation_results.Rds  Full details of each simulated scenario
scenarios_summary.Rds   Quantitative values of each scenario, as converted from the qualitative spreadsheet
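Both are ordinary RDS files, so they can be loaded with base R. The paths below assume the default ~/results output location:

    simulation_results <- readRDS("~/results/simulation_results.Rds")
    scenarios_summary  <- readRDS("~/results/scenarios_summary.Rds")
    str(scenarios_summary)   # inspect the structure of the summary data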

Advanced Customization

Evaluator makes several assumptions to get you up and running as quickly as possible. Advanced users may implement several different customizations including:

  • Risk tolerances - Organizational risk tolerances at a "medium" and "high" level are defined in data/risk_tolerances.csv. Risk tolerances are the aggregate economic loss thresholds defined by your organization; they are not necessarily the same as the size of potential losses from individual scenarios. A good proxy for risk tolerance is the budget authority in your organization: the size of purchase signoff required at the executive level is generally a good indicator of the minimum floor for high risk tolerance (see the sketch after this list).
  • Qualitative mappings - The translation of qualitative labels such as "Frequent" threat events and "Optimized" controls into quantitative parameters is defined in data/qualitative_mappings.csv. The values in this mapping may be changed, but they must be lowercase and must agree with the values used in the survey spreadsheet. Changing the number of levels used for any qualitative label (e.g. changing High/Medium/Low to High/Medium/Low/VeryLow) is unsupported.
  • Styling - Look and feel (fonts, colors, etc.) is defined in the styles/html-styles.css and styles/word-styles-reference.docx files.
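As a sketch, a data/risk_tolerances.csv along these lines would set the two thresholds. The column names and amounts are illustrative; match them to the file shipped with the package:

    level,amount
    medium,5000000
    high,20000000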

Where to Go From Here

While Evaluator is a powerful tool, it does not explicitly attempt to address complex analysis of security risks, interaction between risk scenarios, rolling up multiple levels of risk into aggregations, modelling secondary losses or other advanced topics. As you become more comfortable with quantitative risk analysis, you may wish to dive deeper into these areas (and I hope you do!).

Commercial Software

  • RiskLens, founded by the original creator of the FAIR methodology




This project is governed by a Code of Conduct. By participating in this project you agree to abide by these terms.


The MIT License applies.


evaluator 0.1.1

  • Replaced dependency on modeest with a slimmer statip dependency
  • Removed dependency on magrittr
  • Default (overridable) locations of input and results directories now consistently set to "~/data" and "~/results" respectively
  • generate_report now takes an optional focus_scenario_ids parameter to override the scenarios on which special emphasis (usually executive interest) is desired.
  • Improve user experience for optional packages. User is now prompted to install optional dependencies (shiny, DT, flexdashboard, statip, rmarkdown, etc.) when running reporting functionality which requires them.
  • Substantial improvements in the sample analysis flow detailed in the usage vignette. You can now actually run all the commands as-is and have them work, which was previously "challenging".
  • summarize_all renamed to the more descriptive summarize_to_disk to avoid dplyr conflict
  • Add requirement for at least pander v0.6.1 for tibble compatibility
  • Substantial refactoring on vignette
    • Added missing save steps
    • Corrected package name for Viewer to rstudioapi
    • Fixed a few incorrect placeholders
    • Properly committed compiled files to package for distribution and installation
  • Update all tidyverse calls to account for deprecations and split out of purrrlyr
  • Windows CI builds added via Appveyor
  • Use annotate_logticks over manual breaks on risk_dashboard

evaluator 0.1.0

  • Initial submission to CRAN





Report a bug at https://github.com/davidski/evaluator/issues

Browse source code at https://github.com/cran/evaluator

Authors: David Severski [aut, cre]


MIT + file LICENSE license

Imports dplyr, extrafont, ggplot2, mc2d, purrr, readr, readxl, rlang, scales, stringi, tibble, tidyr, viridis

Suggests DT, pander, psych, ggalt, flexdashboard, forcats, statip, knitr, purrrlyr, rmarkdown, shiny, testthat, covr, mockery

System requirements: pandoc
