Abstract
Machine learning experts prefer to think of their input as a single, homogeneous, and consistent data set. However, when analyzing large volumes of data, the entire data set may no longer fit on a single server and must instead be stored on a distributed file system. Moreover, with the pressing demand to deliver explainable models, experts can no longer study machine learning algorithms in isolation; they must take into account the distributed nature of the stored data, as well as the impact of any data pre-processing steps upstream in the data analysis pipeline. In this paper, we make the point that even basic transformations during data preparation can impact the model learned, and that this effect is exacerbated in a distributed setting. We then sketch our vision of end-to-end explainability of the model learned, taking the pre-processing into account. In particular, we point out the potential of linking research on data provenance with efforts on explainability in machine learning. In doing so, we highlight pitfalls we may encounter in a distributed system on the way to generating more holistic explanations for our machine learning models.
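The claim that even basic transformations during data preparation can impact the model learned, and that distribution makes this worse, can be made concrete with a minimal sketch (ours, not from the paper; the scenario and all names are illustrative). It applies min-max scaling once globally and once per partition of a simulated distributed data set; the two supposedly identical pipelines produce different feature values, so any model trained downstream will differ as well.

```python
# Minimal sketch (illustrative, not from the paper): min-max scaling
# applied globally vs. per partition of a "distributed" data set.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=50.0, scale=10.0, size=1_000)

# Global scaling: statistics computed over the entire data set.
global_scaled = (data - data.min()) / (data.max() - data.min())

# Per-partition scaling: each of four simulated workers scales only
# its local shard, as happens when the step is naively pushed into
# a distributed job without aggregating the statistics first.
partitions = np.array_split(data, 4)
local_scaled = np.concatenate(
    [(p - p.min()) / (p.max() - p.min()) for p in partitions]
)

# The two "identical" pre-processing pipelines disagree, so a model
# trained on these features will differ, too.
print(np.abs(global_scaled - local_scaled).max())  # clearly non-zero
```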
Original language | English |
---|---|
Title of host publication | 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS) |
Publisher | IEEE |
ISBN (Electronic) | 978-1-7281-2519-0 |
DOIs | |
Publication status | Published - 31 Oct 2019 |
Event | 39th IEEE International Conference on Distributed Computing Systems 2019, University of Texas, Dallas, United States<br>Duration: 7 Jul 2019 → 9 Jul 2019<br>Conference number: 39<br>https://theory.utdallas.edu/ICDCS2019/ |
Conference
Conference | 39th IEEE International Conference on Distributed Computing Systems 2019 |
---|---|
Abbreviated title | ICDCS 2019 |
Country/Territory | United States |
City | Dallas |
Period | 7/07/19 → 9/07/19 |
Internet address | https://theory.utdallas.edu/ICDCS2019/ |