The package ExplainPrediction contains methods to generate explanations for individual predictions of
classification and regression models.
Details
The explanation methodology used is based on measuring the contributions of individual features to
an individual prediction. Together, the contributions of all attributes form an explanation of the individual prediction.
Explanations can be visualized with a nomogram. Averaging the explanations over many instances yields an explanation of the
whole model. Two explanation methods are implemented:
EXPLAIN (described in Explaining Classifications For Individual Instances) is much faster
than IME and works for any number of attributes in the model, but cannot explain dependencies expressed disjunctively
in the model. For details see explainVis.
IME can in principle explain any type of dependency in the model.
It uses a sampling-based method to avoid an exhaustive search for dependencies and
works reasonably fast for up to a few dozen attributes in the model. For details see the references.
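The sampling idea behind IME can be sketched in a few lines of R. This is a minimal illustration, not the package's implementation; the names imeContribution, predictFun, and nSamples are hypothetical. The contribution of a feature is estimated as the average change in the prediction when the feature's value is taken from the explained instance rather than from a randomly drawn reference instance:

```r
# Sketch of sampling-based contribution estimation in the spirit of IME
# (illustrative only; names are hypothetical, not the package's API).
# predictFun: takes a one-row data.frame, returns a numeric prediction
imeContribution <- function(predictFun, data, instance, feature, nSamples = 100) {
  total <- 0
  p <- ncol(data)
  for (s in seq_len(nSamples)) {
    perm <- sample(p)                                 # random feature order
    pos <- which(perm == feature)
    before <- if (pos > 1) perm[seq_len(pos - 1)] else integer(0)
    z <- data[sample(nrow(data), 1), , drop = FALSE]  # random reference instance
    withI <- z                                        # 'feature' and its predecessors
    withI[c(before, feature)] <- instance[c(before, feature)]  # come from the instance
    withoutI <- z                                     # only the predecessors do
    if (length(before) > 0) withoutI[before] <- instance[before]
    total <- total + predictFun(withI) - predictFun(withoutI)
  }
  total / nSamples
}

# Toy check with a model that depends only on feature 'a':
d <- data.frame(a = rep(0, 20), b = rep(0, 20))
x <- data.frame(a = 1, b = 1)
f <- function(row) 2 * row[["a"]]
contribA <- imeContribution(f, d, x, feature = 1)  # exactly 2 here
contribB <- imeContribution(f, d, x, feature = 2)  # exactly 0 here
```

In the toy check the reference data are constant, so each sample contributes the same difference; in general the estimate only converges to the game-theoretic contribution as nSamples grows.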
Currently, prediction models implemented in the package CORElearn are supported;
for other models a wrapper of class CoreModel has to be created.
The wrapper has to present the model as a list with the following components:
formula of class formula representing the response and the predictive variables,
noClasses the number of class values in case of a classification model, 0 in case of regression,
class.lev the levels used in the representation of class values (see factor).
Additionally, the wrapper has to implement a function predict which returns the same components as the function
predict.CoreModel in the package CORElearn.
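As a concrete illustration, a wrapper for a two-class model might be built as follows. This is only a sketch following the component list above: the helper wrapModel, the toy fit object, and its constant-probability predictor are assumptions, and a real wrapper would additionally have to coexist with CORElearn's own predict.CoreModel method rather than redefine it.

```r
# Hypothetical wrapper sketch following the component list above
# (wrapModel and the toy predictor are illustrative assumptions).
wrapModel <- function(fit, formula, classLevels) {
  wrapper <- list(
    formula   = formula,               # response and predictive variables
    noClasses = length(classLevels),   # would be 0 for a regression model
    class.lev = classLevels,           # levels of the class factor
    fit       = fit                    # the underlying model object
  )
  class(wrapper) <- "CoreModel"
  wrapper
}

# predict method returning the same components as predict.CoreModel:
# predicted classes and a matrix of class probabilities
predict.CoreModel <- function(object, newdata, ...) {
  n <- nrow(newdata)
  # toy predictor: always return the stored class probabilities
  prob <- matrix(rep(object$fit$classProb, each = n), nrow = n,
                 dimnames = list(NULL, object$class.lev))
  list(class = factor(object$class.lev[max.col(prob)], levels = object$class.lev),
       probabilities = prob)
}

toyFit <- list(classProb = c(0.3, 0.7))
w <- wrapModel(toyFit, Class ~ ., c("no", "yes"))
p <- predict(w, data.frame(x = 1:3))   # p$class is "yes" for all three rows
```

In practice the fit component would hold a real fitted model and the predict method would query it for class probabilities on newdata.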
Marko Robnik-Sikonja, Igor Kononenko: Explaining Classifications For Individual Instances.
IEEE Transactions on Knowledge and Data Engineering, 20:589-600, 2008.
Erik Strumbelj, Igor Kononenko, Marko Robnik-Sikonja: Explaining Instance Classifications with Interactions of
Subsets of Feature Values. Data and Knowledge Engineering, 68(10):886-904, Oct. 2009.
Erik Strumbelj, Igor Kononenko: An Efficient Explanation of Individual Classifications using Game Theory.
Journal of Machine Learning Research, 11(1):1-18, 2010.