Interpretability of complex machine learning models is a growing concern.
This package helps to understand the key factors that drive the
decisions made by a complicated predictive model (a so-called black box model).
This is achieved through local approximations based either on an
additive regression-like model or on a CART-like model that allows for
higher-order interactions. The methodology is based on Tulio Ribeiro, Singh and Guestrin (2016).
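A typical workflow with the functions mentioned in this document (`sample_locally`, `add_predictions`, `fit_explanation`) might look as follows. This is only a sketch: the argument names (`explained_instance`, `explained_var`, `size`, `black_box_model`, `white_box`) and the `wine` example dataset are assumptions for illustration, not verified against the package API.

```r
library(live)

# 1. Simulate a dataset of points similar to the instance we want to explain
#    (argument names below are illustrative, not confirmed by this document).
similar <- sample_locally(data = wine,
                          explained_instance = wine[5, ],
                          explained_var = "quality",
                          size = 500)

# 2. Score the simulated points with the black box model.
similar <- add_predictions(similar,
                           black_box_model = "regr.svm")

# 3. Fit an interpretable (white box) local approximation,
#    e.g. an additive regression-like model.
explanation <- fit_explanation(similar,
                               white_box = "regr.lm")
```

The white box fitted in the last step is what gets inspected: its coefficients (or splits, for a CART-like model) describe which factors drive the black box prediction near the explained instance.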
To get started, install the stable CRAN version or the development version from GitHub.
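The installation commands appear to have been stripped from this text; a standard sketch (the package name `live` and the `MI2DataLab/live` GitHub repository are assumptions, not stated in this document):

```r
# stable CRAN release
install.packages("live")

# development version from GitHub (repository path assumed)
devtools::install_github("MI2DataLab/live")
```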
Features coming up next:

* better support for comparing explanations for different models / different instances,
* improved Shiny application (see the `live_shiny` function in the development version).
If you have any bug reports, feature requests or ideas to improve the methodology, feel free to open an issue.

A Python implementation of LIME and more information about the method: https://github.com/marcotcr/lime
* `sample_locally2` to make results reproducible.
* `data` argument defaults to
* `fit_explanation` functions now carry more information (mainly the explained instance), so some function calls were simplified.
* `add_predictions` also returns the black box model object.
* `fit_explanation` is now more flexible: it can take a list of hyperparameters for the chosen model.
* `add_predictions` improved to handle more learners (for example `ranger`).
* Added a `NEWS.md` file to track changes to the package.
* `sample_locally` uses `data.table` for faster local exploration.