## Abstract

This paper demonstrates the usefulness of distributed local verification of proofs as a tool for the design of self-stabilizing algorithms. In particular, it introduces a somewhat generalized notion of distributed local proofs and utilizes it to improve the time complexity significantly while maintaining space optimality. As a result, we show that optimizing the memory size carries at most a small cost in terms of time, in the context of minimum spanning tree (MST). That is, we present algorithms that are both time and space efficient, both for constructing an MST and for verifying it. This involves several parts that may be considered contributions in themselves. First, we generalize the notion of local proofs, trading off time complexity for memory efficiency. This adds a dimension to the study of distributed local proofs, which has been gaining attention recently. Specifically, we design a (self-stabilizing) proof labeling scheme which is memory optimal (i.e., $$O(\log n)$$ bits per node), and whose time complexity is $$O(\log^2 n)$$ in synchronous networks, or $$O(\varDelta \log^3 n)$$ in asynchronous ones, where $$\varDelta$$ is the maximum degree of the nodes. This answers an open problem posed by Awerbuch et al. (1991). We also show that $$\varOmega(\log n)$$ time is necessary, even in synchronous networks. Another property is that if $$f$$ faults occurred, then, within the required detection time above, they are detected by some node in the $$O(f\log n)$$ locality of each of the faults. Second, we show how to enhance a known transformer that makes input/output algorithms self-stabilizing. It now takes as input an efficient construction algorithm and an efficient self-stabilizing proof labeling scheme, and produces an efficient self-stabilizing algorithm.
When used for MST, the transformer produces a memory-optimal self-stabilizing algorithm whose time complexity, namely $$O(n)$$, is significantly better even than that of previous algorithms (the time complexity of previous MST algorithms that used $$\varOmega(\log^2 n)$$ memory bits per node was $$O(n^2)$$, and the time for space-optimal algorithms was $$O(n|E|)$$). Inherited from our proof labeling scheme, our self-stabilizing MST construction algorithm also has the following two properties: (1) if faults occur after the construction has ended, then they are detected by some nodes within $$O(\log^2 n)$$ time in synchronous networks, or within $$O(\varDelta \log^3 n)$$ time in asynchronous ones, and (2) if $$f$$ faults occurred, then, within the required detection time above, they are detected within the $$O(f\log n)$$ locality of each of the faults. We also show how to improve the above two properties, at the expense of some increase in the memory.
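To illustrate the underlying idea of a proof labeling scheme with local verification, the following is a minimal, centralized sketch of a classic textbook example (not the paper's MST scheme): verifying that a set of parent pointers forms a spanning tree rooted at a designated root. The prover labels each node with its hop distance to the root; each node then runs a purely local check involving only its own label and its parent's label. The function names (`assign_labels`, `node_verifies`) and the dictionary-based graph representation are illustrative assumptions, not taken from the paper.

```python
from collections import deque

def assign_labels(parent, root):
    # Prover (run once after construction): BFS from the root over the
    # claimed tree edges, labeling each node with its hop distance to the
    # root. Each label fits in O(log n) bits.
    children = {}
    for v, p in parent.items():
        children.setdefault(p, []).append(v)
    labels = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for c in children.get(u, []):
            labels[c] = labels[u] + 1
            queue.append(c)
    return labels

def node_verifies(v, parent, labels, root):
    # Verifier: a purely local test. Node v reads only its own label and
    # its parent's label, and accepts iff its distance claim is consistent.
    if v == root:
        return labels.get(v) == 0
    return v in labels and labels[v] == labels.get(parent[v], -2) + 1

# If a fault corrupts the parent pointers into a cycle, at least one node
# on the cycle must fail its local check, since labels cannot strictly
# decrease forever around a cycle -- so some node detects the fault.
```

For example, with `parent = {1: 0, 2: 1, 3: 2}` and root `0`, every node's local check passes; corrupting `parent[1]` to `3` creates a cycle, and node `1` immediately rejects because its parent's label is no longer one less than its own. The paper's contribution can be read against this baseline: achieving such $$O(\log n)$$-bit labels for MST while keeping verification time polylogarithmic.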

Original language | English
---|---
Pages (from-to) | 253-295
Number of pages | 43
Journal | Distributed Computing
Volume | 28
Issue number | 4
DOIs |
State | Published - 4 Aug 2015
Externally published | Yes

### Bibliographical note

Publisher Copyright: © 2015, Springer-Verlag Berlin Heidelberg.

## Keywords

- Distributed network algorithms
- Distributed property verification
- Fast fault detection
- Local fault detection
- Locality
- Minimum spanning tree
- Proof labels
- Self-stabilization

## ASJC Scopus subject areas

- Theoretical Computer Science
- Hardware and Architecture
- Computer Networks and Communications
- Computational Theory and Mathematics