Abstract
We consider the task of discovering functional dependencies in data for target attributes of interest. Solving it requires answering two questions: How do we quantify a dependency in a way that is model-agnostic, interpretable, and reliable against sample-size and dimensionality biases? And how can we efficiently discover the exact or α-approximate top-k dependencies? We address the first question by adopting information-theoretic notions. Specifically, we consider the mutual information score, for which we propose a reliable estimator that enables robust optimization in high-dimensional data. To address the second question, we systematically explore the algorithmic implications of optimizing this measure. We show that the problem is NP-hard, which justifies both worst-case exponential-time and heuristic search methods. We propose two bounding functions for the estimator and use them as pruning criteria in branch-and-bound search to efficiently mine dependencies with approximation guarantees. Empirical evaluation shows that the derived estimator has desirable statistical properties and that the bounding functions lead to effective exact and greedy search algorithms; qualitative experiments confirm that, combined, they allow the framework to discover highly informative dependencies.
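The scoring idea behind the abstract can be illustrated with the plug-in (empirical) mutual information I(X;Y) = H(X) + H(Y) − H(X,Y), computed from value counts over the data. The sketch below is a minimal illustration only: it uses the uncorrected plug-in estimate rather than the paper's reliable (bias-corrected) estimator, and replaces the branch-and-bound algorithm with an exhaustive search over small attribute subsets. The function names and the dict-per-row data layout are assumptions for the example.

```python
from collections import Counter
from itertools import combinations
from math import log2


def entropy(values):
    """Empirical Shannon entropy of a sequence of (hashable) values."""
    n = len(values)
    return -sum(c / n * log2(c / n) for c in Counter(values).values())


def mutual_information(rows, x_cols, y_col):
    """Plug-in estimate I(X;Y) = H(X) + H(Y) - H(X,Y) from a list of row dicts."""
    x_vals = [tuple(r[c] for c in x_cols) for r in rows]
    y_vals = [r[y_col] for r in rows]
    joint = list(zip(x_vals, y_vals))
    return entropy(x_vals) + entropy(y_vals) - entropy(joint)


def search_top_dependency(rows, candidate_cols, y_col, max_size=2):
    """Exhaustively score attribute subsets up to max_size and return the best.

    Stands in for the paper's pruned branch-and-bound search; with the plain
    plug-in score, larger subsets can only look better, which is exactly the
    dimensionality bias the paper's corrected estimator is designed to avoid.
    """
    best, best_score = None, -1.0
    for k in range(1, max_size + 1):
        for subset in combinations(candidate_cols, k):
            score = mutual_information(rows, subset, y_col)
            if score > best_score:
                best, best_score = subset, score
    return best, best_score


# Toy table: 'y' is an exact function of 'a' and independent of 'b'.
rows = [
    {"a": 0, "b": 0, "y": 0},
    {"a": 0, "b": 1, "y": 0},
    {"a": 1, "b": 0, "y": 1},
    {"a": 1, "b": 1, "y": 1},
]
best, score = search_top_dependency(rows, ["a", "b"], "y")
```

On this toy table the search recovers `('a',)` with score H(Y) = 1 bit, the signature of an exact functional dependency, while `('b',)` scores 0.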
| Original language | English |
| --- | --- |
| Pages (from-to) | 4223-4253 |
| Number of pages | 31 |
| Journal | Knowledge and Information Systems |
| Volume | 62 |
| Issue number | 11 |
| DOIs | |
| State | Published - 1 Nov 2020 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2020, The Author(s).
Keywords
- Algorithms
- Approximate functional dependency
- Branch-and-bound
- Information theory
- Knowledge discovery
- Pattern mining
ASJC Scopus subject areas
- Software
- Information Systems
- Human-Computer Interaction
- Hardware and Architecture
- Artificial Intelligence