Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!cam-news-feed3.bbnplanet.com!cam-news-hub1.bbnplanet.com!news.bbnplanet.com!news.mathworks.com!newsgate.duke.edu!interpath!news.interpath.net!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Help: Training sets for Benchmarking Neural Networks
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <E6HIHI.2xH@unx.sas.com>
Date: Mon, 3 Mar 1997 20:34:30 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References:  <Pine.SUN.3.91.970228132640.11700C-100000@loki.brunel.ac.uk>
Organization: SAS Institute Inc.
Lines: 20


In article <Pine.SUN.3.91.970228132640.11700C-100000@loki.brunel.ac.uk>, Ali <cs94aaw@brunel.ac.uk> writes:
|> ...
|> I'm currently doing my final year undergraduate project in which I wish 
|> to compare the performance of backpropagation, a modified genetic 
|> algorithm, and random optimisation in terms of convergence and 
|> generalisation ability in fixed feedforward neural networks. 

Why not include some _good_ training algorithms in your comparison?  See
"What is backprop?" and "What are conjugate gradients,
Levenberg-Marquardt, etc.?" in the Neural Network FAQ, part 2 of 7:
Learning, at ftp://ftp.sas.com/pub/neural/FAQ2.html
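[Not part of the original post: a minimal sketch of the kind of comparison
being suggested — plain fixed-step gradient descent ("vanilla backprop")
versus SciPy's conjugate-gradient optimizer on a toy 1-5-1 feedforward
network. The toy problem, network size, and finite-difference gradient are
my own illustrative assumptions; a real benchmark would use analytic
backprop gradients and a careful experimental design.]

```python
# Illustrative sketch only: compare fixed-step gradient descent with
# conjugate gradients (scipy.optimize.minimize, method='CG') when
# fitting y = sin(x) with a tiny 1-5-1 tanh network.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 50)[:, None]
y = np.sin(X)

def unpack(w):
    # Weight vector layout: W1 (1x5), b1 (5), W2 (5x1), b2 (1)
    W1 = w[:5].reshape(1, 5); b1 = w[5:10]
    W2 = w[10:15].reshape(5, 1); b2 = w[15:16]
    return W1, b1, W2, b2

def loss(w):
    # Mean squared error of the network on the training set
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

def grad(w, eps=1e-6):
    # Central-difference gradient, for brevity; a real comparison
    # would use the analytic backprop gradient.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

w0 = rng.normal(scale=0.5, size=16)

# Plain gradient descent with a fixed learning rate, 200 updates
w = w0.copy()
for _ in range(200):
    w -= 0.1 * grad(w)
gd_loss = loss(w)

# Conjugate gradients with the same iteration budget
res = minimize(loss, w0, jac=grad, method='CG',
               options={'maxiter': 200})

print("gradient descent:", gd_loss, " conjugate gradients:", res.fun)
```

On problems like this, CG's line search usually reaches a lower error in
the same number of iterations than any single fixed step size can — which
is the point of the FAQ entries cited above.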

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
 *** Do not send me unsolicited commercial or political email! ***

