Explorations of the mean field theory learning algorithm

Authors

  • Carsten Peterson
  • Eric Hartman

Summary, in English

The mean field theory (MFT) learning algorithm is elaborated and explored with respect to a variety of tasks. MFT is benchmarked against the back-propagation learning algorithm (BP) on two different feature-recognition problems: two-dimensional mirror symmetry and multidimensional statistical pattern classification. We find that while the two algorithms are very similar with respect to generalization properties, MFT normally requires substantially fewer training epochs than BP. Since the MFT model is bidirectional rather than feed-forward, its use extends naturally from purely functional mappings to content-addressable memory. A network with N visible and N hidden units can store up to approximately 4N patterns with good content-addressability. We also stress an implementational advantage of MFT: it maps naturally onto VLSI circuitry.

Publishing year

1989

Language

English

Pages

475-494

Publication/Series

Neural Networks

Volume

2

Issue

6

Document type

Journal article

Publisher

Elsevier

Topic

  • Other Physics Topics

Keywords

  • Bidirectional
  • Content addressable memory
  • Generalization
  • Learning algorithm
  • Mean field theory
  • Neural network

Status

Published

ISBN/ISSN/Other

  • ISSN: 0893-6080