Explorations of the mean field theory learning algorithm
Author
- Carsten Peterson
Summary, in English
The mean field theory (MFT) learning algorithm is elaborated and explored with respect to a variety of tasks. MFT is benchmarked against the back-propagation learning algorithm (BP) on two different feature recognition problems: two-dimensional mirror symmetry and multidimensional statistical pattern classification. We find that while the two algorithms are very similar with respect to generalization properties, MFT normally requires a substantially smaller number of training epochs than BP. Since the MFT model is bidirectional, rather than feed-forward, its use can be extended naturally from purely functional mappings to a content addressable memory. A network with N visible and N hidden units can store up to approximately 4N patterns with good content-addressability. We stress an implementational advantage for MFT: it is natural for VLSI circuitry.
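The MFT approach replaces the stochastic settling of a Boltzmann machine with deterministic mean-field equations, iterated once with the visible units clamped to a training pattern and once running free, with weights updated from the difference in co-activations. As a rough illustration of that general scheme only (not the paper's exact procedure; the function names, the update rule's form, and all parameter values below are illustrative assumptions):

```python
import numpy as np

def mft_settle(W, v0, clamped_mask, clamped_vals, T=1.0, n_iter=100):
    """Relax the network toward a mean-field fixed point at temperature T.

    Iterates the mean-field equations  v_i = tanh((W v)_i / T)  while
    holding the clamped units at their target values.
    """
    v = v0.astype(float).copy()
    v[clamped_mask] = clamped_vals
    for _ in range(n_iter):
        v = np.tanh(W @ v / T)
        v[clamped_mask] = clamped_vals  # clamped units stay fixed
    return v

def mft_weight_update(W, v_clamped, v_free, lr=0.1):
    """One Boltzmann-style learning step: move co-activations in the
    clamped (teaching) phase up and those in the free phase down."""
    dW = lr * (np.outer(v_clamped, v_clamped) - np.outer(v_free, v_free))
    np.fill_diagonal(dW, 0.0)  # keep zero self-connections
    return W + dW
```

Because the update uses the full symmetric weight matrix rather than a feed-forward pass, the same settling routine can run with outputs clamped instead of inputs, which is what makes the content-addressable-memory use described in the abstract natural.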
Publishing year
1989
Language
English
Pages
475-494
Publication/Series
Neural Networks
Volume
2
Issue
6
Document type
Journal article
Publisher
Elsevier
Topic
- Other Physics Topics
Keywords
- Bidirectional
- Content addressable memory
- Generalization
- Learning algorithm
- Mean field theory
- Neural network
Status
Published
ISBN/ISSN/Other
- ISSN: 0893-6080