References
Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32.
Explainability and Interpretability
A problem remains in understanding the underlying mechanisms behind the forecasts. In classical forecasting methods, a fixed model is estimated, with statistical evidence on the contribution of each predictor. This provides strong explainability and interpretability, since we can indicate exactly which factors matter and to what degree. Moreover, the model-building process itself can easily be explained, e.g. as illustrated by the gifs earlier in this playbook. For machine learning methods such as random forests, variable-importance measures exist that show the contribution of the predictors, which provides some explainability, but these are not as clear-cut as conventional statistics. Moreover, the random draws used in the procedure may lead to slightly different outcomes between runs, which hampers interpretability. Interpreting the underlying model of a random forest is difficult in any case.
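The two caveats above, that importance measures offer only partial explainability and that random draws shift the results between runs, can be illustrated with a small sketch. This is not the playbook's own pipeline, just a minimal example using scikit-learn (assumed available) on synthetic data where one predictor matters strongly, one weakly, and one not at all:

```python
# Sketch: variable importances from a random forest, and their seed dependence.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
# Outcome depends strongly on predictor 0, weakly on predictor 1,
# and not at all on predictor 2.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=300)

importances = []
for seed in (1, 2):
    forest = RandomForestRegressor(n_estimators=100, random_state=seed)
    forest.fit(X, y)
    importances.append(forest.feature_importances_)

# The ranking of predictors is stable, but the exact importance values
# differ between seeds because of the random bootstrap and feature draws.
for imp in importances:
    print(imp)
```

The ordering of the predictors comes out the same for both seeds, but the numeric importances do not, which is exactly the interpretability caveat: the method says *which* factors matter more, yet not with the fixed, test-statistic-backed degree that a classical model provides.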
Fourth round
If there are possible societal issues related to the digital twin, a new set of cards can be added that presents societal issues digital twins might raise. In this round, participants are asked to reflect on the value of the various scenarios for society as a whole. It invites participants to reflect as citizens on a future with the digital twin.
Hajjem, A., Bellavance, F., & Larocque, D. (2014). Mixed-effects random forest for clustered data. Journal of Statistical Computation and Simulation, 84(6), 1313-1328.