Alfaro, Esteban, Matías Gámez, and Noelia García. 2013. “adabag: An R Package for Classification with Boosting and Bagging.” Journal of Statistical Software 54 (2): 1–35. http://www.jstatsoft.org/v54/i02/.
Arrieta, Alejandro Barredo, Natalia Dı́az-Rodrı́guez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcı́a, et al. 2020. “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges Toward Responsible AI.” Information Fusion 58: 82–115.
Banerjee, Mousumi, Ying Ding, and Anne-Michelle Noone. 2012. “Identifying Representative Trees from Ensembles.” Statistics in Medicine 31 (15): 1601–16. https://doi.org/10.1002/sim.4492.
Breiman, Leo. 1996a. “Bagging Predictors.” Machine Learning 24 (2): 123–40.
———. 1996b. “Heuristics of Instability and Stabilization in Model Selection.” The Annals of Statistics 24 (6): 2350–83. https://doi.org/10.1214/aos/1032181158.
———. 2001. “Random Forests.” Machine Learning 45 (1): 5–32. https://doi.org/10.1023/A:1010933404324.
Breiman, Leo, Jerome Friedman, Charles J Stone, and Richard A Olshen. 1984. Classification and Regression Trees. Belmont: Wadsworth International Group.
Briand, Bénédicte, Gilles R Ducharme, Vanessa Parache, and Catherine Mercat-Rommens. 2009. “A Similarity Measure to Assess the Stability of Classification Trees.” Computational Statistics & Data Analysis 53 (4): 1208–17. https://doi.org/10.1016/j.csda.2008.10.033.
Chipman, HA, EI George, and RE McCulloch. 1998. “Making Sense of a Forest of Trees.” In Computing Science and Statistics, Proceedings of the 30th Symposium on the Interface, edited by Sanford Weisberg, 84–92. Fairfax, VA: Interface Foundation of North America.
Dua, Dheeru, and Casey Graff. 2017. “UCI Machine Learning Repository.” University of California, Irvine, School of Information and Computer Sciences. http://archive.ics.uci.edu/ml.
Fehrman, Elaine, Awaz K Muhammad, Evgeny M Mirkes, Vincent Egan, and Alexander N Gorban. 2017. “The Five Factor Model of Personality and Evaluation of Drug Consumption Risk.” In Data Science, 231–42. Springer. https://doi.org/10.1037/10140-001.
Freund, Yoav, and Robert E Schapire. 1997. “A Decision-Theoretic Generalization of on-Line Learning and an Application to Boosting.” Journal of Computer and System Sciences 55 (1): 119–39.
Hastie, Trevor J., Robert John Tibshirani, and Jerome H Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer. https://doi.org/10.1007/978-0-387-84858-7.
Hornik, Kurt, Christian Buchta, and Achim Zeileis. 2009. “Open-Source Machine Learning: R Meets Weka.” Computational Statistics 24 (2): 225–32. https://doi.org/10.1007/s00180-008-0119-7.
Hothorn, Torsten, Kurt Hornik, and Achim Zeileis. 2006. “Unbiased Recursive Partitioning: A Conditional Inference Framework.” Journal of Computational and Graphical Statistics 15 (3): 651–74. https://doi.org/10.1198/106186006X133933.
Hothorn, Torsten, and Achim Zeileis. 2015. “partykit: A Modular Toolkit for Recursive Partytioning in R.” Journal of Machine Learning Research 16: 3905–9. http://jmlr.org/papers/v16/hothorn15a.html.
Kaufman, Leonard, and Peter J Rousseeuw. 1990. Finding Groups in Data: An Introduction to Cluster Analysis. Hoboken: John Wiley & Sons. https://doi.org/10.1002/9780470316801.
Leibniz, Gottfried Wilhelm. 1764. “Nouveaux essais sur l’entendement humain, Livre IV, Chap. XVII.”
Liaw, Andy, and Matthew Wiener. 2002. “Classification and Regression by randomForest.” R News 2 (3): 18–22. https://CRAN.R-project.org/doc/Rnews/.
Lundberg, Scott M, Gabriel Erion, Hugh Chen, Alex DeGrave, Jordan M Prutkin, Bala Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal, and Su-In Lee. 2020. “From Local Explanations to Global Understanding with Explainable AI for Trees.” Nature Machine Intelligence 2 (1): 56–67.
Maechler, Martin, Peter Rousseeuw, Anja Struyf, Mia Hubert, and Kurt Hornik. 2019. cluster: Cluster Analysis Basics and Extensions.
McCrae, Robert R, and Paul T Costa. 2004. “A Contemplated Revision of the NEO Five-Factor Inventory.” Personality and Individual Differences 36 (3): 587–96. https://doi.org/10.1016/s0191-8869(03)00118-1.
Patton, Jim H., Matthew S. Stanford, and Ernest S. Barratt. 1995. “Factor Structure of the Barratt Impulsiveness Scale.” Journal of Clinical Psychology 51 (6): 768–74. https://doi.org/10.1002/1097-4679(199511)51:6<768::aid-jclp2270510607>3.0.co;2-1.
Pfeifer, Bastian, Hubert Baniecki, Anna Saranti, Przemyslaw Biecek, and Andreas Holzinger. 2022. “Multi-Omics Disease Module Detection with an Explainable Greedy Decision Forest.” Scientific Reports 12 (1): 1–15.
Philipp, Michel, Thomas Rusch, Kurt Hornik, and Carolin Strobl. 2018. “Measuring the Stability of Results from Supervised Statistical Learning.” Journal of Computational and Graphical Statistics 27 (4): 685–700.
Philipp, Michel, Achim Zeileis, and Carolin Strobl. 2016. “A Toolkit for Stability Assessment of Tree-Based Learners.” In Proceedings of COMPSTAT 2016 – 22nd International Conference on Computational Statistics, edited by Ana Colubi, Angela Blanco, and Cristian Gatu, 315–25. The International Statistical Institute/International Association for Statistical Computing.
Ridgeway, Greg. 2007. “Generalized Boosted Models: A Guide to the gbm Package.” Update 1 (1): 2007.
Rousseeuw, Peter J. 1987. “Silhouettes: A Graphical Aid to the Interpretation and Validation of Cluster Analysis.” Journal of Computational and Applied Mathematics 20: 53–65. https://doi.org/10.1016/0377-0427(87)90125-7.
Schubert, Erich, and Peter J Rousseeuw. 2019. “Faster k-Medoids Clustering: Improving the PAM, CLARA, and CLARANS Algorithms.” In International Conference on Similarity Search and Applications, 171–87. Springer.
Shannon, William D, and David Banks. 1999. “Combining Classification Trees Using MLE.” Statistics in Medicine 18 (6): 727–40. https://doi.org/10.1002/(sici)1097-0258(19990330)18:6<727::aid-sim61>3.3.co;2-u.
Sies, Aniek, and Iven Van Mechelen. 2020. “C443: A Methodology to See a Forest for the Trees.” Journal of Classification 37: 730–53. https://doi.org/10.1007/s00357-019-09350-4.
Skurichina, Marina, and Robert PW Duin. 2002. “Bagging, Boosting and the Random Subspace Method for Linear Classifiers.” Pattern Analysis & Applications 5 (2): 121–35.
Strobl, Carolin, James Malley, and Gerhard Tutz. 2009. “An Introduction to Recursive Partitioning: Rationale, Application, and Characteristics of Classification and Regression Trees, Bagging, and Random Forests.” Psychological Methods 14 (4): 323. https://doi.org/10.1037/a0016973.
Therneau, Terry, Beth Atkinson, and Brian Ripley. 2015. “Package rpart.” Available online: https://CRAN.R-project.org/package=rpart.
Turney, Peter. 1995. “Technical Note: Bias and the Quantification of Stability.” Machine Learning 20 (1): 23–33. https://doi.org/10.1007/bf00993473.
van Buuren, Stef, and Karin Groothuis-Oudshoorn. 2011. “mice: Multivariate Imputation by Chained Equations in R.” Journal of Statistical Software 45 (3): 1–67. https://www.jstatsoft.org/v45/i03/.
Wright, Marvin N., and Andreas Ziegler. 2017. “ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R.” Journal of Statistical Software 77 (1): 1–17. https://doi.org/10.18637/jss.v077.i01.
Zuckerman, Marvin, D Michael Kuhlman, Jeffrey Joireman, Paul Teta, and Michael Kraft. 1993. “A Comparison of Three Structural Models for Personality: The Big Three, the Big Five, and the Alternative Five.” Journal of Personality and Social Psychology 65 (4): 757. https://doi.org/10.1037//0022-3514.65.4.757.