Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!oitnews.harvard.edu!purdue!lerc.nasa.gov!magnus.acs.ohio-state.edu!math.ohio-state.edu!howland.reston.ans.net!paladin.american.edu!news.ecn.uoknor.edu!munnari.oz.au!cs.mu.OZ.AU!munta.cs.mu.OZ.AU!schuller
From: schuller@munta.cs.mu.OZ.AU (Wayne Paul SCHULLER)
Subject: Re: Unsupervised Hebbian learning
Message-ID: <9517215.24222@mulga.cs.mu.OZ.AU>
Sender: news@cs.mu.OZ.AU (CS-Usenet)
Organization: Computer Science, University of Melbourne, Australia
References: <3rp2ff$b69@osprey.ee.port.ac.uk> <GEORGIOU.95Jun15122529@wiley.csusb.edu>
Date: Wed, 21 Jun 1995 05:21:06 GMT
Lines: 21

georgiou@wiley.csusb.edu (George M. Georgiou) writes:

>"FN" == Frauke Nonnenmacher <fn@ee.port.ac.uk> writes:

>FN> The algorithm I'm using is:

>FN> Oi = sum ( Wij * Ij)   (Oi = ith output neuron, Ij = jth input neuron)

>FN> Wij = Wij + L * Oi * (Ij - sum[k=0->N] (Ok * Wkj))  (N = No. of output neurons)

>The update rule you are using is wrong. It should be this instead:

>Wij = Wij + L * Oi * (Ij - Oi * Wij)

Hang on, according to Zurada, "Introduction to Artificial Neural Systems",
it should be:

Wij = Wij + L * Oi * Ij   (I'm assuming L is a constant learning rate.)

That is, the plain Hebbian rule, with no decay term at all.

Is Zurada wrong? Am I wrong?
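For what it's worth, here's a quick numerical sketch (plain Python, all
variable names mine, toy data of my own invention) of how the two updates
behave for a single output neuron on correlated 2-D inputs. The plain
Hebbian weights grow without bound, while the version with the
subtractive decay term (Oja-style) settles near unit length:

```python
import math
import random

random.seed(0)

# Toy 2-D inputs, correlated along the (1, 1) direction.
xs = [random.gauss(0, 1) for _ in range(200)]
data = [(x, x + random.gauss(0, 0.1)) for x in xs]

L = 0.02  # learning-rate constant (the "L" in the posted rules)

def norm(w):
    return math.sqrt(sum(wj * wj for wj in w))

w_hebb = [0.1, 0.1]  # plain Hebbian:          Wij += L * Oi * Ij
w_oja = [0.1, 0.1]   # with decay term (Oja):  Wij += L * Oi * (Ij - Oi * Wij)

for _ in range(2):  # two passes over the data
    for I in data:
        # Plain Hebbian update: weights only ever get reinforced.
        O = sum(w * i for w, i in zip(w_hebb, I))
        w_hebb = [w + L * O * i for w, i in zip(w_hebb, I)]

        # Decay-term update: large outputs shrink the weights back.
        O = sum(w * i for w, i in zip(w_oja, I))
        w_oja = [w + L * O * (i - O * w) for w, i in zip(w_oja, I)]

print("plain Hebbian |W| =", norm(w_hebb))  # keeps growing with more passes
print("with decay    |W| =", norm(w_oja))   # stays bounded, near 1
```

So both rules are "Hebbian" in the Oi * Ij part; the disagreement is only
about whether there is a normalising term tacked on.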

wayne.
