Message-ID: <331FCBDF.152D@da.leso.epfl.ch>
Date: Fri, 07 Mar 1997 09:03:43 +0100
From: Manuel Bauer <Manuel.Bauer@da.leso.epfl.ch>
X-Mailer: Mozilla 3.0 (Win95; I)
MIME-Version: 1.0
Newsgroups: comp.ai.neural-nets
Subject: Question: fixing parameters after pruning
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
NNTP-Posting-Host: lesopc28.epfl.ch
Lines: 26
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!nntp.sei.cmu.edu!news.psc.edu!wink.radian.com!gatech!csulb.edu!hammer.uoregon.edu!arclight.uoregon.edu!europa.clark.net!worldnet.att.net!howland.erols.net!surfnet.nl!news-zh.switch.ch!swidir.switch.ch!epflnews.epfl.ch!lesopc28.epfl.ch

Hi,

I am applying the Levenberg-Marquardt learning algorithm to a
feed-forward network and would like to delete some of the connections.
In some articles on pruning techniques, the authors propose setting the
deleted parameters to zero and freezing them there.

I take this to mean that training proceeds normally (with all weights)
and the pruned weights are reset to zero after each learning step
(method 1). Of course this is much easier to program than actually
deleting the weights and computing with only the remaining ones
(method 2).
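To make method 1 concrete, here is a minimal sketch (my own illustration, not from any of the articles; the weight vector, mask, and plain gradient step are all made up for clarity) of keeping the full weight vector and re-zeroing the pruned entries after every update:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)                         # all weights, pruned ones included
mask = np.array([1, 0, 1, 1, 0], dtype=bool)   # False = pruned connection

def training_step(w, grad, lr=0.1):
    w = w - lr * grad            # ordinary update over ALL weights
    return np.where(mask, w, 0.0)  # method 1: clamp pruned weights back to zero

grad = rng.normal(size=5)
w = training_step(w, grad)
print(w)  # pruned entries are exactly zero after the step
```

Method 2 would instead shrink the weight vector to `w[mask]` and compute everything with the reduced dimension.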

My question is: are both methods completely equivalent?

With method 1, the LM algorithm calculates first and second derivatives
of the error function with respect to weights that are no longer
meaningful (because they are clamped to zero). Does this have any
influence on the calculation of the other weights?
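As a way of picturing the question, here is a toy sketch (entirely my own; it assumes a linear least-squares model so the Jacobian J is exact, and uses the LM normal equations (J^T J + mu*I) dw = -J^T r) comparing a step computed over all weights and then zeroed (method 1) with a step computed over the surviving weights only (method 2):

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.normal(size=(8, 4))   # Jacobian of residuals w.r.t. all 4 weights
r = rng.normal(size=8)        # residuals at the current point
mu = 0.1                      # LM damping factor
keep = np.array([True, True, False, True])  # third weight is pruned

# Method 1: solve with the full Jacobian, then zero the pruned component.
dw1 = np.linalg.solve(J.T @ J + mu * np.eye(4), -J.T @ r)
dw1[~keep] = 0.0

# Method 2: drop the pruned column before solving.
Jk = J[:, keep]
dw2 = np.zeros(4)
dw2[keep] = np.linalg.solve(Jk.T @ Jk + mu * np.eye(keep.sum()), -Jk.T @ r)

# The off-diagonal terms of J^T J couple the pruned column to the others
# in method 1, so the surviving components of the step generally differ.
print(dw1[keep])
print(dw2[keep])
```

In this toy case the two steps for the surviving weights do not coincide, which is exactly what makes me wonder whether the methods are equivalent for a real network.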

If anyone has comments on this, please reply by email and I'll
summarize to the group.

Thank you for your help.


Manuel Bauer, Switzerland
