In this paper, we propose scalable methods for maximizing a regularized submodular function f≜g−ℓ, expressed as the difference between a monotone submodular function g and a modular function ℓ. Submodularity is inherently related to the notions of diversity, coverage, and representativeness. In particular, finding the mode (i.e., the most likely configuration) of many popular probabilistic models of diversity, such as determinantal point processes and strongly log-concave distributions, involves maximization of (regularized) submodular functions. Since a regularized function f can potentially take on negative values, the classic theory of submodular maximization, which heavily relies on the non-negativity assumption of submodular functions, is not applicable. To circumvent this challenge, we develop the first one-pass streaming algorithm for maximizing a regularized submodular function subject to a k-cardinality constraint. Furthermore, we develop the first distributed algorithm that returns a solution S in O(1/ϵ) rounds of MapReduce computation. We highlight that our result, even for the unregularized case where the modular term ℓ is zero, improves the memory and communication complexity of the state-of-the-art by a factor of O(1/ϵ) while arguably providing a simpler distributed algorithm and a unifying analysis. We empirically study the performance of our scalable methods on a set of real-life applications, including finding the mode of negatively correlated distributions, vertex cover of social networks, and several data summarization tasks.
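To make the problem setting concrete, the following is a minimal sketch of the "distorted greedy" heuristic, a standard centralized baseline for maximizing f = g − ℓ under a k-cardinality constraint. It is not the streaming or distributed algorithm proposed in the paper; the coverage function g, the cost table, and all names below are illustrative assumptions.

```python
# Sketch of distorted greedy for a regularized submodular objective f = g - l,
# where g is monotone submodular (here: weighted set coverage) and l is modular
# (per-element costs). This is a simple baseline, NOT the paper's algorithm.

def distorted_greedy(elements, g, cost, k):
    """Pick up to k elements, approximately maximizing g(S) - sum of costs."""
    S = []
    for i in range(k):
        # Distortion factor down-weights early marginal gains of g,
        # which compensates for the (possibly negative) modular penalty.
        factor = (1.0 - 1.0 / k) ** (k - i - 1)
        best, best_gain = None, 0.0
        for e in elements:
            if e in S:
                continue
            gain = factor * (g(S + [e]) - g(S)) - cost[e]
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element has a positive distorted gain
            break
        S.append(best)
    return S

# Toy instance (hypothetical data): g = size of the covered universe.
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 2}}
cost = {"a": 0.5, "b": 0.2, "c": 3.0, "d": 0.4}

def g(S):
    covered = set()
    for e in S:
        covered |= coverage[e]
    return float(len(covered))

print(distorted_greedy(list(coverage), g, cost, k=3))
```

Note that element "c" is never selected: its coverage gain (one new item) never outweighs its cost of 3.0, which is exactly the regime where the regularizer ℓ changes the answer relative to plain submodular maximization.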
Published in: Proceedings of the 38th International Conference on Machine Learning (ICML), 2021.