Topological constraints and robustness in liquid state machines

Hananel Hazan, Larry M. Manevitz

Research output: Contribution to journal › Article › peer-review

Abstract

The Liquid State Machine (LSM) is a method of computing with temporal neurons which, unlike standard artificial neural networks, can among other things classify intrinsically temporal data directly. It has also been put forward as a natural model of certain kinds of brain functions. There are two results in this paper: (1) We show that Liquid State Machines as normally defined cannot serve as a natural model for brain function, because they are very vulnerable to failures in parts of the model. This result is in contrast to work by Maass et al., which showed that these models are robust to noise in the input data. (2) We show that imposing certain kinds of topological constraints (such as the "small world assumption"), which have been claimed to be reasonably plausible biologically, can restore robustness in this sense to LSMs.
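For readers unfamiliar with reservoir computing, the sketch below illustrates the two ingredients the abstract refers to: a recurrent reservoir wired with a small-world (Watts-Strogatz style) topology, and a simulated "failure" of a fraction of its units. This is a simplified rate-based reservoir written only for illustration, not the spiking LSM implementation from the Maass laboratory used in the paper; all parameter values and function names are arbitrary choices for this summary.

```python
import numpy as np

rng = np.random.default_rng(0)

def small_world_weights(n, k=6, p=0.1, scale=0.9):
    """Watts-Strogatz style connectivity: a ring lattice with k nearest
    neighbours in which each edge is rewired to a random target with
    probability p."""
    W = np.zeros((n, n))
    for i in range(n):
        for offset in range(1, k // 2 + 1):
            j = (i + offset) % n
            if rng.random() < p:            # rewire this edge at random
                j = int(rng.integers(n))
            W[i, j] = rng.normal()
            W[j, i] = rng.normal()
    radius = np.max(np.abs(np.linalg.eigvals(W)))
    return scale * W / radius               # keep the dynamics contractive

def run_reservoir(W, w_in, inputs, failed=()):
    """Simple tanh reservoir; units listed in `failed` are clamped to zero
    at every step to mimic neuron failure."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)
        if len(failed):
            x[np.asarray(failed)] = 0.0     # simulated damage
        states.append(x.copy())
    return np.array(states)

n = 200
W = small_world_weights(n)
w_in = rng.normal(size=n)
signal = np.sin(np.linspace(0, 8 * np.pi, 400))

healthy = run_reservoir(W, w_in, signal)
dead = rng.choice(n, size=n // 10, replace=False)   # disable 10% of units
damaged = run_reservoir(W, w_in, signal, failed=dead)
print("mean |state difference| after 10% unit failure:",
      np.abs(healthy - damaged).mean())
```

The same experiment can be repeated with a purely random weight matrix in place of `small_world_weights` to compare how strongly the reservoir state drifts under the same amount of damage, which is the kind of robustness comparison the paper carries out for spiking LSMs.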

Original language: English
Pages (from-to): 1597-1606
Number of pages: 10
Journal: Expert Systems with Applications
Volume: 39
Issue number: 2
DOIs
State: Published - 1 Feb 2012

Bibliographical note

Funding Information:
We thank the Caesarea Rothschild Institute for support of this research. The first author thanks Prof. Alek Vainstein for support in the form of a research fellowship. A short version of this work was presented at the MICAI-2010 meeting (Manevitz & Hazan, 2010), whose organizers we thank for inviting us to write this extended version. We also thank the Maass laboratory for the public use of their code.

Keywords

  • Liquid State Machine
  • Machine learning
  • Reservoir computing
  • Robustness
  • Small world topology

ASJC Scopus subject areas

  • General Engineering
  • Computer Science Applications
  • Artificial Intelligence
