Subject: comp.ai.neural-nets FAQ, Part 7 of 7: Hardware
Date: Fri, 29 Aug 1997 03:00:35 GMT

Archive-name: ai-faq/neural-nets/part7
URL: ftp://ftp.sas.com/pub/neural/FAQ7.html
Maintainer: saswss@unx.sas.com (Warren S. Sarle)

Copyright 1997 by Warren S. Sarle, Cary, NC, USA. Answers provided by other
authors as cited below are copyrighted by those authors, who by submitting
the answers for the FAQ give permission for the answer to be reproduced as
part of the FAQ in any of the ways specified in part 1 of the FAQ. 

This is part 7 (of 7) of a monthly posting to the Usenet newsgroup
comp.ai.neural-nets. See part 1 of this posting for full information
on what it is all about.

========== Questions ========== 
********************************

Part 1: Introduction
Part 2: Learning
Part 3: Generalization
Part 4: Books, data, etc.
Part 5: Free software
Part 6: Commercial software
Part 7: Hardware, etc.

   Neural Network hardware?
   How to learn an inverse of a function?
   How to get invariant recognition of images under translation, rotation,
   etc.?
   Unanswered FAQs

------------------------------------------------------------------------

Subject: Neural Network hardware?
=================================

Thomas Lindblad notes on 96-12-30: 

   The reactive tabu search algorithm has been implemented by a group
   in Trento, Italy. ISA and VME boards are available, and PCI boards
   will be soon. We tested the system with the IRIS and SATIMAGE data,
   and it did better than most other chips. 

   The Neuroclassifier is still available from Holland and is also the
   fastest NN chip, with a transient time of less than 100 ns. 

   JPL is making another chip, ARL in WDC is making another, so there
   are a few things going on ... 

Overview articles: 

 o Ienne, Paolo and Kuhn, Gary (1995), "Digital Systems for Neural
   Networks", in Papamichalis, P. and Kerwin, R., eds., Digital Signal
   Processing Technology, Critical Reviews Series CR57, Orlando, FL: SPIE
   Optical Engineering, pp. 314-345, 
   ftp://mantraftp.epfl.ch/mantra/ienne.spie95.A4.ps.gz or 
   ftp://mantraftp.epfl.ch/mantra/ienne.spie95.US.ps.gz 
 o ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/neurhard.ps (1995) 
 o ftp://ftp.urc.tue.nl/pub/neural/hardware_general.ps.gz (1993) 

Various NN hardware information can be found at the Web site 
http://www1.cern.ch/NeuralNets/nnwInHepHard.html (from people who really use
such stuff!). Several applications are described in 
http://www1.cern.ch/NeuralNets/nnwInHepExpt.html 

Further WWW pointers to NN Hardware:
http://msia02.msi.se/~lindsey/nnwAtm.html

Here is a short list of companies: 

1. HNC, INC.
++++++++++++

     HNC Inc.
     5930 Cornerstone Court West
     San Diego, CA 92121-3728

     619-546-8877  Phone
     619-452-6524  Fax
      HNC markets:
       the Database Mining Workstation (DMW), a PC-based system that
       builds models of relationships and patterns in data, and
       the SIMD Numerical Array Processor (SNAP), an attached parallel
       array processor in a VME chassis with between 16 and 64 parallel
       floating-point processors. It provides between 640 MFLOPS and
       2.56 GFLOPS for neural network and signal processing applications.
       A Sun SPARCstation serves as the host. The SNAP won the 1993 IEEE
       Gordon Bell Prize for best price/performance among
       supercomputer-class systems.

2. SAIC (Science Applications International Corporation)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++

      10260 Campus Point Drive
      MS 71, San Diego
      CA 92121
      (619) 546 6148
      Fax: (619) 546 6736

3. Micro Devices
++++++++++++++++

      30 Skyline Drive
      Lake Mary
      FL 32746-6201
      (407) 333-4379
      Micro Devices makes the MD1220 'Neural Bit Slice'.
      Each of the products mentioned so far has very different usages.
      Although this sounds similar to Intel's product, the
      architectures are not.

4. Intel Corp
+++++++++++++

      2250 Mission College Blvd
      Santa Clara, Ca 95052-8125
      Attn ETANN, Mail Stop SC9-40
      (408) 765-9235
      Intel made an experimental chip (which is no longer produced):
      the 80170NW Electrically Trainable Analog Neural Network (ETANN).
      It has 64 'neurons' on it, almost fully internally connected,
      and the chip can be put in a hierarchical architecture to do 2
      billion interconnects per second.
      Support software by
        California Scientific Software
        10141 Evening Star Dr #6
        Grass Valley, CA 95945-9051
        (916) 477-7481
      Their product is called 'BrainMaker'.

5. NeuralWare, Inc
++++++++++++++++++

      Penn Center West
      Bldg IV Suite 227
      Pittsburgh
      PA 15276
      They sell only software (simulators), but for many platforms.

6. Tubb Research Limited
++++++++++++++++++++++++

      7a Lavant Street
      Petersfield
      Hampshire
      GU32 2EL
      United Kingdom
      Tel: +44 730 60256

7. Adaptive Solutions Inc
+++++++++++++++++++++++++

      1400 NW Compton Drive
      Suite 340
      Beaverton, OR 97006
      U. S. A.
      Tel: 503-690-1236;   FAX: 503-690-1249

8. NeuroDynamX, Inc.
++++++++++++++++++++

      P.O. Box 14
      Marion, OH  43301-0014
      Voice (614) 387-5074  Fax: (614) 382-4533
      Internet:  jwrogers@on-ramp.net

      InfoTech Software Engineering purchased the software and
      trademarks from NeuroDynamX, Inc. and, using the NeuroDynamX tradename,
      continues to publish the DynaMind, DynaMind Developer Pro and iDynaMind
      software packages. 

9. IC Tech, Inc.
++++++++++++++++

    *  NRAM (Neural Retrieve Associative Memory) is available as a
       stand-alone chip or as a functional unit that can be embedded
       inside another chip, e.g., a digital signal processor or SRAM.
       The data storage procedure is compatible with conventional
       memories, i.e., a single presentation of the data is sufficient.
       Set-up and hold times are comparable to those of existing devices
       of similar technology dimensions. Data retrieval is where NRAM
       excels: when addressed, this content-addressable memory produces
       the one previously stored pattern that most closely matches the
       presented data sequence. If no matching pattern is found, no data
       is returned. This set of error-correction and smart-retrieval
       tasks is accomplished without comparators, processors, or other
       external logic. The number of data bits is adjustable, and the
       optimized circuitry consumes little power. Applications of NRAM
       include rapid search of large databases, template matching, and
       associative recall.

     *  The NRAM development environment includes a PC card with an
        on-board NRAM chip and C++ source code to address the device.

       
        Contact:

        IC Tech, Inc.
        2157 University Park Dr.
        Okemos, MI 48864
        (517) 349-4544
        (517) 349-2559  (FAX)
        http://www.ic-tech.com
        ictech@ic-tech.com

And here is an incomplete overview of known neural computers, each with
its most recent known reference.

\subsection*{Digital}
\subsubsection{Special Computers}

{\bf AAP-2}
Takumi Watanabe, Yoshi Sugiyama, Toshio Kondo, and Yoshihiro Kitamura.
Neural network simulation on a massively parallel cellular array
processor: AAP-2.
In International Joint Conference on Neural Networks, 1989.

{\bf ANNA}
B.E.Boser, E.Sackinger, J.Bromley, Y.LeCun, and L.D.Jackel.\\
Hardware Requirements for Neural Network Pattern Classifiers.\\
In {\it IEEE Micro}, 12(1), pages 32-40, February 1992.

{\bf Analog Neural Computer}
Paul Mueller et al.
Design and performance of a prototype analog neural computer.
In Neurocomputing, 4(6):311-323, 1992.

{\bf APx -- Array Processor Accelerator}\\
F.Pazienti.\\
Neural networks simulation with array processors.
In {\it Advanced Computer Technology, Reliable Systems and Applications;
Proceedings of the 5th Annual Computer Conference}, pages 547-551.
IEEE Comput. Soc. Press, May 1991. ISBN: 0-8186-2141-9.

{\bf ASP -- Associative String Processor}\\
A.Krikelis.\\
A novel massively associative processing architecture for the
implementation of artificial neural networks.\\
In {\it 1991 International Conference on Acoustics, Speech and
Signal Processing}, volume 2, pages 1057-1060. IEEE Comput. Soc. Press,
May 1991.

{\bf BSP400}
Jan N.H. Heemskerk, Jacob M.J. Murre, Jaap Hoekstra, Leon H.J.G.
Kemna, and Patrick T.W. Hudson.
The BSP400: A modular neurocomputer assembled from 400 low-cost
microprocessors.
In International Conference on Artificial Neural Networks. Elsevier
Science, 1991.

{\bf BLAST}\\
J.G.Elias, M.D.Fisher, and C.M.Monemi.\\
A multiprocessor machine for large-scale neural network simulation.
In {\it IJCNN91-Seattle: International Joint Conference on Neural
Networks}, volume 1, pages 469-474. IEEE Comput. Soc. Press, July 1991.
ISBN: 0-7883-0164-1.

{\bf CNAPS Neurocomputer}\\
H.McCartor\\
Back Propagation Implementation on the Adaptive Solutions CNAPS
Neurocomputer.\\
In {\it Advances in Neural Information Processing Systems}, 3, 1991.

{\bf GENES~IV and MANTRA~I}\\
Paolo Ienne and  Marc A. Viredaz\\
{GENES~IV}: A Bit-Serial Processing Element for a Multi-Model
   Neural-Network Accelerator\\
Journal of {VLSI} Signal Processing, volume 9, no. 3, pages 257--273, 1995.

{\bf MA16 -- Neural Signal Processor}
U.Ramacher, J.Beichter, and N.Bruls.\\
Architecture of a general-purpose neural signal processor.\\
In {\it IJCNN91-Seattle: International Joint Conference on Neural
Networks}, volume 1, pages 443-446. IEEE Comput. Soc. Press, July 1991.
ISBN: 0-7083-0164-1.

{\bf Mindshape}
Jan N.H. Heemskerk, Jacob M.J. Murre, Arend Melissant, Mirko Pelgrom,
and Patrick T.W. Hudson.
Mindshape: a neurocomputer concept based on a fractal architecture.
In International Conference on Artificial Neural Networks. Elsevier
Science, 1992.

{\bf mod 2}
Michael L. Mumford, David K. Andes, and Lynn R. Kern.
The mod 2 neurocomputer system design.
In IEEE Transactions on Neural Networks, 3(3):423-433, 1992.

{\bf NERV}\\
R.Hauser, H.Horner, R. Maenner, and M.Makhaniok.\\
Architectural Considerations for NERV - a General Purpose Neural
Network Simulation System.\\
In {\it Workshop on Parallel Processing: Logic, Organization and
Technology -- WOPPLOT 89}, pages 183-195. Springer Verlag, March 1989.
ISBN: 3-5405-5027-5.

{\bf NP -- Neural Processor}\\
D.A.Orrey, D.J.Myers, and J.M.Vincent.\\
A high performance digital processor for implementing large artificial
neural networks.\\
In {\it Proceedings of the IEEE 1991 Custom Integrated Circuits
Conference}, pages 16.3/1-4. IEEE Comput. Soc. Press, May 1991.
ISBN: 0-7883-0015-7.

{\bf RAP -- Ring Array Processor }\\
N.Morgan, J.Beck, P.Kohn, J.Bilmes, E.Allman, and J.Beer.\\
The ring array processor: A multiprocessing peripheral for connectionist
applications. \\
In {\it Journal of Parallel and Distributed Computing}, pages
248-259, April 1992.

{\bf RENNS -- REconfigurable Neural Networks Server}\\
O.Landsverk, J.Greipsland, J.A.Mathisen, J.G.Solheim, and L.Utne.\\
RENNS - a Reconfigurable Computer System for Simulating Artificial
Neural Network Algorithms.\\
In {\it Parallel and Distributed Computing Systems, Proceedings of the
ISMM 5th International Conference}, pages 251-256. The International
Society for Mini and Microcomputers - ISMM, October 1992.
ISBN: 1-8808-4302-1.

{\bf SMART -- Sparse Matrix Adaptive and Recursive Transforms}\\
P.Bessiere, A.Chams, A.Guerin, J.Herault, C.Jutten, and J.C.Lawson.\\
From Hardware to Software: Designing a ``Neurostation''.\\
In {\it VLSI design of Neural Networks}, pages 311-335, June 1990.

{\bf SNAP -- Scalable Neurocomputer Array Processor}
E.Wojciechowski.\\
SNAP: A parallel processor for implementing real time neural networks.\\
In {\it Proceedings of the IEEE 1991 National Aerospace and Electronics
Conference; NAECON-91}, volume 2, pages 736-742. IEEE Comput.Soc.Press,
May 1991.

{\bf Toroidal Neural Network Processor}\\
S.Jones, K.Sammut, C.Nielsen, and J.Staunstrup.\\
Toroidal Neural Network: Architecture and Processor Granularity
Issues.\\
In {\it VLSI design of Neural Networks}, pages 229-254, June 1990.

{\bf SMART and SuperNode}
P. Bessière, A. Chams, and P. Chol.
MENTAL: A virtual machine approach to artificial neural networks
programming. In NERVES, ESPRIT B.R.A. project no. 3049, 1991.


\subsubsection{Standard Computers}

{\bf EMMA-2}\\
R.Battiti, L.M.Briano, R.Cecinati, A.M.Colla, and P.Guido.\\
An application oriented development environment for Neural Net models on
multiprocessor Emma-2.\\
In {\it Silicon Architectures for Neural Nets; Proceedings for the IFIP
WG.10.5 Workshop}, pages 31-43. North Holland, November 1991.
ISBN: 0-4448-9113-7.

{\bf iPSC/860 Hypercube}\\
D.Jackson and D.Hammerstrom\\
Distributing Back Propagation Networks Over the Intel iPSC/860
Hypercube\\
In {\it IJCNN91-Seattle: International Joint Conference on Neural
Networks}, volume 1, pages 569-574. IEEE Comput. Soc. Press, July 1991.
ISBN: 0-7083-0164-1.

{\bf SCAP -- Systolic/Cellular Array Processor}\\
Wei-Ling L., V.K.Prasanna, and K.W.Przytula.\\
Algorithmic Mapping of Neural Network Models onto Parallel SIMD
Machines.\\
In {\it IEEE Transactions on Computers}, 40(12), pages 1390-1401,
December 1991. ISSN: 0018-9340.

------------------------------------------------------------------------

Subject: How to learn an inverse of a function? 
================================================

Ordinarily, NNs learn a function Y = f(X), where Y is a vector of
outputs, X is a vector of inputs, and f() is the function to be learned.
Sometimes, however, you may want to learn an inverse of a function f(),
that is, given Y, you want to be able to find an X such that Y = f(X).
In general, there may be many different Xs that satisfy the equation Y =
f(X). 

For example, in robotics (DeMers and Kreutz-Delgado, 1996, 1997), X might
describe the positions of the joints in a robot's arm, while Y would
describe the location of the robot's hand. There are simple formulas to
compute the location of the hand given the positions of the joints, called
the "forward kinematics" problem. But there is no simple formula for the
"inverse kinematics" problem to compute positions of the joints that yield a
given location for the hand. Furthermore, if the arm has several joints,
there will usually be many different positions of the joints that yield the
same location of the hand, so the forward kinematics function is many-to-one
and has no unique inverse. Picking any X such that Y = f(X) is OK if
the only aim is to position the hand at Y. However, if the aim is to
generate a series of points to move the hand through an arc, this may be
insufficient. In this case the series of Xs needs to lie in the same
"branch" of the function space. Care must be taken to avoid solutions
that yield inefficient or impossible movements of the arm. 
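
For concreteness, here is a minimal sketch (in Python; illustrative
code, not from the robotics references) of the forward kinematics of a
planar two-joint arm with link lengths l1 and l2. The function name and
the numeric values are made up for the example; the point is that two
different joint configurations ("elbow up" and "elbow down") reach the
same hand position, so f() is many-to-one:

   from math import cos, sin, pi

   def forward(theta1, theta2, l1=1.0, l2=1.0):
       """Hand position (x, y) for joint angles theta1, theta2."""
       x = l1 * cos(theta1) + l2 * cos(theta1 + theta2)
       y = l1 * sin(theta1) + l2 * sin(theta1 + theta2)
       return x, y

   # Two different joint configurations, one hand position (0, sqrt(2)):
   print(forward(pi / 4, pi / 2))        # elbow bent one way
   print(forward(3 * pi / 4, -pi / 2))   # elbow bent the other way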

As another example, consider an industrial process in which X represents
settings of control variables imposed by an operator, and Y represents
measurements of the product of the industrial process. The function Y =
f(X) can be learned by a NN using conventional training methods. But the
goal of the analysis may be to find control settings X that yield a product
with specified measurements Y, in which case an inverse of f(X) is
required. In industrial applications, financial considerations are
important, so not just any setting X that yields the desired result Y may
be acceptable. Perhaps a function can be specified that gives the cost of X
resulting from energy consumption, raw materials, etc., in which case you
would want to find the X that minimizes the cost function while satisfying
the equation Y = f(X). 

The obvious way to try to learn an inverse function is to generate a set of
training data from a given forward function, but designate Y as the input
and X as the output when training the network. Using a least-squares error
function, this approach will fail if f() is many-to-one. The problem is
that for an input Y, the net will not learn any single X such that Y =
f(X), but will instead learn the arithmetic mean of all the Xs in the
training set that satisfy the equation (Bishop, 1995, pp. 207-208). One
solution to this difficulty is to construct a network that learns a mixture
approximation to the conditional distribution of X given Y (Bishop, 1995,
pp. 212-221). However, the mixture method will not work well in general for
an X vector that is more than one-dimensional, such as Y = X_1^2 +
X_2^2, since the number of mixture components required may increase
exponentially with the dimensionality of X. And you are still left with the
problem of extracting a single output vector from the mixture distribution,
which is nontrivial if the mixture components overlap considerably. Another
solution is to use a highly robust error function, such as a redescending
M-estimator, that learns a single mode of the conditional distribution
instead of learning the mean (Huber, 1981; Rohwer and van der Rest 1996).
Additional regularization terms or constraints may be required to persuade
the network to choose appropriately among several modes, and there may be
severe problems with local optima. 
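
The failure of the naive approach is easy to demonstrate numerically.
The following sketch (illustrative Python code, not from Bishop) uses
the one-dimensional forward function f(x) = x^2, for which each y > 0
has two preimages, +sqrt(y) and -sqrt(y). For a target y near 1, a
least-squares fit is pulled toward the mean of the xs near +1 and -1,
which is roughly 0, and f(0) is nowhere near 1:

   import numpy as np

   rng = np.random.default_rng(0)
   x = rng.uniform(-2.0, 2.0, size=100000)   # the "X" training values
   y = x ** 2                                # many-to-one forward function

   # For a given input y, the least-squares-optimal output of any
   # sufficiently flexible network is the mean of the xs paired with
   # that y.  Estimate that mean for y near 1 (x near +1 or -1):
   near = np.abs(y - 1.0) < 0.05
   print(np.mean(x[near]))   # close to 0, yet f(0) = 0, not 1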

Another approach is to train a network to learn the forward mapping f()
and then numerically invert the function. Finding X such that Y = f(X)
is simply a matter of solving a nonlinear system of equations, for which
many algorithms can be found in the numerical analysis literature (Dennis
and Schnabel 1983). One way to solve nonlinear equations is to turn the
problem into an optimization problem by minimizing sum_i (Y_i - f(X)_i)^2.
This
method fits in nicely with the usual gradient-descent methods for training
NNs (Kindermann and Linden 1990). Since the nonlinear equations will
generally have multiple solutions, there may be severe problems with local
optima, especially if some solutions are considered more desirable than
others. You can deal with multiple solutions by inventing some objective
function that measures the goodness of different solutions, and optimizing
this objective function under the nonlinear constraint Y = f(X) using
any of numerous algorithms for nonlinear programming (NLP; see Bertsekas,
1995, and other references under "What are conjugate gradients,
Levenberg-Marquardt, etc.?"). The power and flexibility of the nonlinear
programming approach are offset by possibly high computational demands. 
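
Here is a minimal sketch of inversion by gradient descent on the inputs
(illustrative Python code, not the algorithm of Kindermann and Linden).
A fixed toy mapping stands in for the trained network; any
differentiable forward function could be substituted, and a real
network would supply the gradient by backpropagating to its inputs
rather than by the finite-difference approximation used here:

   import numpy as np

   def f(x):
       """Stand-in for a trained forward network from R^2 to R^2."""
       return np.array([np.tanh(x[0] + x[1]), x[0] * x[1]])

   def invert(f, y_target, x0, lr=0.05, steps=5000, eps=1e-6):
       """Find x with f(x) close to y_target by descending on x."""
       x = np.array(x0, dtype=float)
       for _ in range(steps):
           e0 = np.sum((f(x) - y_target) ** 2)   # current squared error
           grad = np.zeros_like(x)
           for i in range(len(x)):               # forward differences
               xp = x.copy()
               xp[i] += eps
               grad[i] = (np.sum((f(xp) - y_target) ** 2) - e0) / eps
           x -= lr * grad
       return x

   y = f(np.array([0.5, -0.3]))         # a target known to be reachable
   x_hat = invert(f, y, x0=[0.2, -0.1])
   print(x_hat, f(x_hat))               # f(x_hat) should be close to y

Note that this toy mapping is symmetric in its two inputs, so the
target already has two exact solutions, (0.5, -0.3) and (-0.3, 0.5);
which one the descent finds depends on the starting point x0. This
illustrates the multiple-solution problem discussed above.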

If the forward mapping f() is obtained by training a network, there will
generally be some error in the network's outputs. The magnitude of this
error can be difficult to estimate. The process of inverting a network can
propagate this error, so the results should be checked carefully for
validity and numerical stability. Some training methods can produce not just
a point output but also a prediction interval (Bishop, 1995; White, 1992).
You can take advantage of prediction intervals when inverting a network by
using NLP methods. For example, you could try to find an X that minimizes
the width of the prediction interval under the constraint that the equation 
Y = f(X) is satisfied. Or, instead of requiring that Y = f(X) be satisfied
exactly, you could try to find an X such that the prediction interval is
contained within some specified interval while minimizing some cost
function. 

For more mathematics concerning the inverse-function problem, as well as
some interesting methods involving self-organizing maps, see DeMers and
Kreutz-Delgado (1996, 1997). For NNs that are relatively easy to invert, see
the Adaptive Logic Networks described in the software sections of the FAQ. 

References: 

   Bertsekas, D. P. (1995), Nonlinear Programming, Belmont, MA: Athena
   Scientific. 

   Bishop, C.M. (1995), Neural Networks for Pattern Recognition, Oxford:
   Oxford University Press. 

   DeMers, D., and Kreutz-Delgado, K. (1996), "Canonical Parameterization of
   Excess motor degrees of freedom with self organizing maps", IEEE Trans
   Neural Networks, 7, 43-55. 

   DeMers, D., and Kreutz-Delgado, K. (1997), "Inverse kinematics of
   dextrous manipulators," in Omidvar, O., and van der Smagt, P., (eds.) 
   Neural Systems for Robotics, San Diego: Academic Press, pp. 75-116. 

   Dennis, J.E. and Schnabel, R.B. (1983), Numerical Methods for
   Unconstrained Optimization and Nonlinear Equations, Englewood Cliffs,
   NJ: Prentice-Hall. 

   Huber, P.J. (1981), Robust Statistics, NY: Wiley. 

   Kindermann, J., and Linden, A. (1990), "Inversion of Neural Networks by
   Gradient Descent," Parallel Computing, 14, 277-286,
   ftp://icsi.Berkeley.EDU/pub/ai/linden/KindermannLinden.IEEE92.ps.Z 

   Rohwer, R., and van der Rest, J.C. (1996), "Minimum description length,
   regularization, and multimodal data," Neural Computation, 8, 595-609. 

   White, H. (1992), "Nonparametric Estimation of Conditional Quantiles
   Using Neural Networks," in Page, C. and Le Page, R. (eds.), Proceedings
   of the 23rd Symposium on the Interface: Computing Science and Statistics,
   Alexandria, VA: American Statistical Association, pp. 190-199. 

------------------------------------------------------------------------

Subject: How to get invariant recognition of images under
=========================================================
translation, rotation, etc.?
============================

See: 

   Bishop, C.M. (1995), Neural Networks for Pattern Recognition, Oxford:
   Oxford University Press, section 8.7. 

   Masters, T. (1994), Signal and Image Processing with Neural Networks: A
   C++ Sourcebook, NY: Wiley. 

   Soucek, B., and The IRIS Group (1992), Fast Learning and Invariant Object
   Recognition, NY: Wiley. 

------------------------------------------------------------------------

Subject: Unanswered FAQs
========================

If you have good answers for any of these questions, please send them to the
FAQ maintainer at saswss@unx.sas.com. 

 o How many training cases do I need? 
 o How should I split the data into training and validation sets? 
 o What error functions can be used? 
 o What are some good constructive training algorithms? 
 o How can I invert a network? 
 o How can I select important input variables? 
 o How to handle missing data? 
 o Should NNs be used in safety-critical applications? 
 o My net won't learn! What should I do??? 
 o My net won't generalize! What should I do??? 

------------------------------------------------------------------------

That's all folks (End of the Neural Network FAQ).

Acknowledgements: Thanks to all the people who helped to get the stuff
                  above into the posting. I cannot name them all, because
                  I would make far too many errors then. :->

                  No?  Not good?  You want individual credit?
                  OK, OK. I'll try to name them all. But: no guarantee....

  THANKS FOR HELP TO:
(in alphabetical order of email addresses, I hope)

 o Steve Ward <71561.2370@CompuServe.COM> 
 o Allen Bonde <ab04@harvey.gte.com> 
 o Accel Infotech Spore Pte Ltd <accel@solomon.technet.sg> 
 o Ales Krajnc <akrajnc@fagg.uni-lj.si> 
 o Alexander Linden <al@jargon.gmd.de> 
 o Matthew David Aldous <aldous@mundil.cs.mu.OZ.AU> 
 o S.Taimi Ames <ames@reed.edu> 
 o Axel Mulder <amulder@move.kines.sfu.ca> 
 o anderson@atc.boeing.com 
 o Andy Gillanders <andy@grace.demon.co.uk> 
 o Davide Anguita <anguita@ICSI.Berkeley.EDU> 
 o Avraam Pouliakis <apou@leon.nrcps.ariadne-t.gr> 
 o Kim L. Blackwell <avrama@helix.nih.gov> 
 o Mohammad Bahrami <bahrami@cse.unsw.edu.au> 
 o Paul Bakker <bakker@cs.uq.oz.au> 
 o Stefan Bergdoll <bergdoll@zxd.basf-ag.de> 
 o Jamshed Bharucha <bharucha@casbs.Stanford.EDU> 
 o Carl M. Cook <biocomp@biocomp.seanet.com> 
 o Yijun Cai <caiy@mercury.cs.uregina.ca> 
 o L. Leon Campbell <campbell@brahms.udel.edu> 
 o Cindy Hitchcock <cindyh@vnet.ibm.com> 
 o Clare G. Gallagher <clare@mikuni2.mikuni.com> 
 o Craig Watson <craig@magi.ncsl.nist.gov> 
 o Yaron Danon <danony@goya.its.rpi.edu> 
 o David Ewing <dave@ndx.com> 
 o David DeMers <demers@cs.ucsd.edu> 
 o Denni Rognvaldsson <denni@thep.lu.se> 
 o Duane Highley <dhighley@ozarks.sgcl.lib.mo.us> 
 o Dick.Keene@Central.Sun.COM 
 o DJ Meyer <djm@partek.com> 
 o Donald Tveter <drt@mcs.com> 
 o Daniel Tauritz <dtauritz@wi.leidenuniv.nl> 
 o Wlodzislaw Duch <duch@phys.uni.torun.pl> 
 o E. Robert Tisdale <edwin@flamingo.cs.ucla.edu> 
 o Athanasios Episcopos <episcopo@fire.camp.clarkson.edu> 
 o Frank Schnorrenberg <fs0997@easttexas.tamu.edu> 
 o Gary Lawrence Murphy <garym@maya.isis.org> 
 o gaudiano@park.bu.edu 
 o Lee Giles <giles@research.nj.nec.com> 
 o Glen Clark <opto!glen@gatech.edu> 
 o Phil Goodman <goodman@unr.edu> 
 o guy@minster.york.ac.uk 
 o Horace A. Vallas, Jr. <hav@neosoft.com> 
 o Gregory E. Heath <heath@ll.mit.edu> 
 o Joerg Heitkoetter <heitkoet@lusty.informatik.uni-dortmund.de> 
 o Ralf Hohenstein <hohenst@math.uni-muenster.de> 
 o Ian Cresswell <icressw@leopold.win-uk.net> 
 o Gamze Erten <ictech@mcimail.com> 
 o Ed Rosenfeld <IER@aol.com> 
 o Franco Insana <INSANA@asri.edu> 
 o Janne Sinkkonen <janne@iki.fi> 
 o Javier Blasco-Alberto <jblasco@ideafix.cps.unizar.es> 
 o Jean-Denis Muller <jdmuller@vnet.ibm.com> 
 o Jeff Harpster <uu0979!jeff@uu9.psi.com> 
 o Jonathan Kamens <jik@MIT.Edu> 
 o J.J. Merelo <jmerelo@kal-el.ugr.es> 
 o Dr. Jacek Zurada <jmzura02@starbase.spd.louisville.edu> 
 o Jon Gunnar Solheim <jon@kongle.idt.unit.no> 
 o Josef Nelissen <jonas@beor.informatik.rwth-aachen.de> 
 o Joey Rogers <jrogers@buster.eng.ua.edu> 
 o Subhash Kak <kak@gate.ee.lsu.edu> 
 o Ken Karnofsky <karnofsky@mathworks.com> 
 o Kjetil.Noervaag@idt.unit.no 
 o Luke Koops <koops@gaul.csd.uwo.ca> 
 o Kurt Hornik <Kurt.Hornik@tuwien.ac.at> 
 o Thomas Lindblad <lindblad@kth.se> 
 o Clark Lindsey <lindsey@particle.kth.se> 
 o Lloyd Lubet <llubet@rt66.com> 
 o William Mackeown <mackeown@compsci.bristol.ac.uk> 
 o Maria Dolores Soriano Lopez <maria@vaire.imib.rwth-aachen.de> 
 o Mark Plumbley <mark@dcs.kcl.ac.uk> 
 o Peter Marvit <marvit@cattell.psych.upenn.edu> 
 o masud@worldbank.org 
 o Miguel A. Carreira-Perpinan<mcarreir@moises.ls.fi.upm.es> 
 o Yoshiro Miyata <miyata@sccs.chukyo-u.ac.jp> 
 o Madhav Moganti <mmogati@cs.umr.edu> 
 o Jyrki Alakuijala <more@ee.oulu.fi> 
 o Jean-Denis Muller <muller@bruyeres.cea.fr> 
 o Michael Reiss <m.reiss@kcl.ac.uk> 
 o mrs@kithrup.com 
 o Maciek Sitnik <msitnik@plearn.edu.pl> 
 o R. Steven Rainwater <ncc@ncc.jvnc.net> 
 o Nigel Dodd <nd@neural.win-uk.net> 
 o Barry Dunmall <neural@nts.sonnet.co.uk> 
 o Paolo Ienne <Paolo.Ienne@di.epfl.ch> 
 o Paul Keller <pe_keller@ccmail.pnl.gov> 
 o Peter Hamer <P.G.Hamer@nortel.co.uk> 
 o Pierre v.d. Laar <pierre@mbfys.kun.nl> 
 o Michael Plonski <plonski@aero.org> 
 o Lutz Prechelt <prechelt@ira.uka.de> [creator of FAQ] 
 o Richard Andrew Miles Outerbridge <ramo@uvphys.phys.uvic.ca> 
 o Rand Dixon <rdixon@passport.ca> 
 o Robin L. Getz <rgetz@esd.nsc.com> 
 o Richard Cornelius <richc@rsf.atd.ucar.edu> 
 o Rob Cunningham <rkc@xn.ll.mit.edu> 
 o Robert.Kocjancic@IJS.si 
 o Randall C. O'Reilly <ro2m@crab.psy.cmu.edu> 
 o Rutvik Desai <rudesai@cs.indiana.edu> 
 o Robert W. Means <rwmeans@hnc.com> 
 o Stefan Vogt <s_vogt@cis.umassd.edu> 
 o Osamu Saito <saito@nttica.ntt.jp> 
 o Scott Fahlman <sef+@cs.cmu.edu> 
 o <seibert@ll.mit.edu> 
 o Sheryl Cormicle <sherylc@umich.edu> 
 o Ted Stockwell <ted@aps1.spa.umn.edu> 
 o Stephanie Warrick <S.Warrick@cs.ucl.ac.uk> 
 o Serge Waterschoot <swater@minf.vub.ac.be> 
 o Thomas G. Dietterich <tgd@research.cs.orst.edu> 
 o Thomas.Vogel@cl.cam.ac.uk 
 o Ulrich Wendl <uli@unido.informatik.uni-dortmund.de> 
 o M. Verleysen <verleysen@dice.ucl.ac.be> 
 o VestaServ@aol.com 
 o Sherif Hashem <vg197@neutrino.pnl.gov> 
 o Matthew P Wiener <weemba@sagi.wistar.upenn.edu> 
 o Wesley Elsberry <welsberr@orca.tamu.edu> 
 o Dr. Steve G. Romaniuk <ZLXX69A@prodigy.com> 

The FAQ was created in June/July 1991 by Lutz Prechelt, who also
maintained it until November 1995. Warren Sarle has maintained the FAQ
since December 1995. 


Bye

  Warren & Lutz

Previous part is part 6. 

Neural network FAQ / Warren S. Sarle, saswss@unx.sas.com

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
* Do not send me unsolicited commercial, political, or religious email *
