
Neuron Digest   Saturday, 28 Nov 1992
                Volume 10 : Issue 21

Today's Topics:
                     What's a good deal? Any ideas?
                      Commercial software reviews?
                   Manufacturing and neural networks?
                  Introductory works on Connectionism?
                       Job opening at U. Colorado
             NIPS 92 Young Scientists' Network Announcement
     Very Fast Simulated Reannealing code available for beta testing


Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@cattell.psych.upenn.edu". The ftp archives are
available from cattell.psych.upenn.edu (130.91.68.31). Back issues
requested by mail will eventually be sent, but may take a while.

----------------------------------------------------------------------

Subject: What's a good deal? Any ideas?
From:    Anya L. Simons <VY0112@BINGVAXA.BITNET>
Date:    Wed, 18 Nov 92 14:58:00 -0500


Dear Peter,

    I'm about to apply for a fellowship that pays only for laboratory
"equipment," a total between $1500 and $4000.  I'm definitely getting
fuzzy software, but I'm torn between Hamilton/Avnet (directly blessed by
Motorola), Togai Infralogic (the grandaddy), and Hyperlogic's Cubicalc.
My constraints are that I am controlling a robotic hand attached to a PUMA
arm which will be controlled with a Motorola HC11.  I'm not so interested
in cutsie graphics as I am in several working alternatives that can be
instantly turned into user modifiable C code.  Of course Motorola is my first
choice since the HC11 is theirs, but Togai is JUST coming out with HC11-happy
code (still officially development).  Cubicalc seems to be pretty much cutsie
graphics.  I can write fuzzy and neural code in 10 minutes; I want something
that will react quickly.

FUZZY

Hamilton/Avnet wants $1495 for their modifiable C code development package
that will convert directly for the MC68HC11EVM board.

Togai says their Fuzzy C development package (university) is $1150,
and $500 for an HC11 (as if there weren't a dozen between my two sources).

Hyperlogic offers no student/university discounts.  Their fuzzy CBC-1503
software with run-time source for Turbo C and Windows 3.0 is $2450.

NEURAL

It seems Intel wants $8000 (university price) for the software to
accompany their $470 analog neural chip.  Not in my budget.

Hyperlogic sells an OWL library for C neural programming -- $3000.

Two questions:

* Is any of this really worth it, or should I just write my own C code
  and convert to the proper assembler?  (I'm not an assembly programmer,
  so if the C assembler isn't optimal, oh well, someone will have to convert
  it for me.)

* For the neural, will I get major time savings for an 18 input, 2 or 3
  layers of 5 or 6 units apiece, backprop-style network?

Any clues? I'm about to mail in the forms....

Thanks!

Baby Doc Anya

(P.S.)

My GE Outlook Center won't let me, but if anyone wants to fax out a
request for a free fuzzy car demo from a German Company, their address is:

INFORM GmbH Aachen
Geschäftsbereich Fuzzy Technologien
Pascalstraße 23
D-5100 Aachen 1

and their fax number is +49 (24 08) 60 90.

It is a fuzzyTECH demo with real time simulation of a fuzzy car on a
5.25" floppy diskette.  Absolutely free, no obligation, and runs on
"MS-DOS-ATs with VGA-Grafics."

All they ask is what kind of application areas you are looking at,
and what your address is to send the floppy to.

BDA



------------------------------

From:    eric@sunlight.llnl.gov (Eric Keto)
Subject: Commercial software reviews?
Date:    Thu, 19 Nov 92 22:38:56 -0800

[[ Editor's Note: The list I published last year is now probably
quite out of date, and was oriented to free/academic software.
Anyone wish to contribute a new, updated commercial list? -PM ]]

Where would I find the latest reviews, brief descriptions, or lists of
commercially available neural network products? Whatever would help in
purchasing a program.  Thanks. --  eric@sunlight.llnl.gov


------------------------------

Subject: Manufacturing and neural networks?
From:    A_VENKATACHA@UNHH.UNH.EDU
Date:    Fri, 20 Nov 92 10:35:41 -0500

[[ Editor's Note: ND gets periodic requests for neural net
applications in various specific areas. There is a conference Dec
7-11 in Cancun, Mexico on technology transfer specifically about
applications in manufacturing (announced in V9 #16).  Is any ND
reader going? Could anyone else offer a starting bibliography?  -PM ]]


Hello:

I am doing research in the area of Neural Network applications in
MANUFACTURING, using the backpropagation paradigm.  I would
appreciate it if I could get a list of references on this topic.

Please send mail to :  venky@unh.edu

Thanks.

A.R. Venkatachalam
University of New Hampshire


------------------------------

Subject: Introductory works on Connectionism?
From:    MSANDRI%IVRUNIV.bitnet@ICINECA.CINECA.IT
Date:    Fri, 20 Nov 92 17:54:51 -0100

[[ Editor's Note: Although I am probably behind the times, I still
recommend Rumelhart & McClelland's PDP books (MIT Press), though some
folks tout Anderson's Neurocomputing. Is it time for another "beginner's
reading list"? -PM ]]

I am looking for simple introductory readings about connectionism:
1) definition
2) philosophical relevancy and its relations to cognitivism
3) historical origins
Any references will be welcome.

=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=
  ++++  //////  ******  !    !        CARLO VETTORE
   ++   /       **      !    !        UNIVERSITY OF VERONA
   ++   /////   ****    !    !        ISTITUTO DI SCIENZE ECONOMICHE
   ++       /   **       !  !         VIA DELL'ARTIGLIERE 19
  ++++  /////   ******    !!          37129 VERONA -- ITALY
     MSANDRI@IVRUNIV.BITNET           FAX NO. +39-45-8098529
=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=:=


------------------------------

Subject: Job opening at U. Colorado
From:    rroberts@pstar.psy.du.edu
Date:    Tue, 24 Nov 92 12:15:01 -0700

                                                November 24, 1992

Dear Colleague:

     The Department of Psychology at the University of Denver anticipates
tenure-track positions at the Assistant Professor level, to begin in the
fall of 1993, for someone who could conduct a strong research program in
1) cognitive psychology or 2) cognitive neuroscience, especially with a
developmental or clinical orientation.  We are especially interested in
applicants with expertise in connectionist modelling.
  
     The Department already has an established training program in
Cognitive Psychology. Currently, we have both faculty and graduate
students with interests in cognitive neuroscience in all three
areas of our department, Child-Clinical, Cognitive-Experimental,
and Developmental.  The department is currently considering
establishing a new, inter-area graduate training program in
developmental cognitive neuroscience.  A successful applicant for
this position would thus have an important role in helping to build
this new training program.

     If you know of someone appropriate for either the
developmental or cognitive neuroscience position who may be
interested, please have the applicant send his/her material to
Developmental Cognitive Neuroscience Search Committee, Department
of Psychology, 2155 S. Race Street, Denver, CO 80208.  Applicants
should send a vita, three letters of reference, and recent
publications or manuscripts that will allow us to evaluate his/her
scholarly work as well as potential fit within the department.  If
you or a potential applicant would like more information about this
position, please feel free to call me (303) 871-4403 or send an e-
mail message (bpenning@pstar.psy.du.edu).  We very much appreciate
your bringing potential applicants to our attention.

Bruce F. Pennington, Ph.D.
Chair, Developmental Cognitive Neuroscience Search Committee


------------------------------

Subject: NIPS 92 Young Scientists' Network Announcement
From:    "Thomas A. Tucker" <tomy@maxwell.ee.washington.edu>
Date:    Wed, 25 Nov 92 16:03:28 -0800


  
                ANNOUNCING AN INFORMAL NIPS92 PRESENCE
  
                 -- The Young Scientists' Network --
  
  
        The YSN is a mailing list dedicated to the "discussion of issues 
   involving the employment of scientists, especially those just beginning 
   their careers."  YSN started about two years ago when a group of physics 
   PhD's noted a common thread among their difficulties in securing academic 
   and industrial employment -- but since then scientists in other fields 
   have reported similar situations.  Our conclusion has been that there's 
   a glut of scientists, not the shortage that had been predicted for years.
   Consequently, many well-trained persons are finding themselves under- or 
   un-employed, and, contrary to traditional attitudes, this poor employment 
   record is NOT a reflection of individual abilities so much as it is 
   an indictment of the priorities of American science.
   
        The discussions on YSN have identified many contributors
   to this phenomenon: universities producing too many graduates,
   insubstantial R&D investments by US companies, research funds drying
   up, the growing tendency of universities to create non-tenure-track
   positions when facing budget constraints, and a host of less tangible
   but undeniably real interpersonal and political pressures.
  
        We'd like to offer the opportunity to learn about and get
  involved with the Young Scientists' Network next week at NIPS.
  With luck, we'll be able to set a meeting time in Denver by next
  Monday, and this information should be available when you check in.
  
  Our concrete goals:
  
        -- an opportunity to learn more about the aims and means
           of the Young Scientists' Network, with email contact
           information and sample YSN Digests.
  
        -- solicitation of ideas that might be presented in a
           national legislative or administrative forum 
           (Congress or the NSF, for example), to help foster
           economic opportunities for research.
  
        -- compiling a directory of academic and industrial
           organizations which might offer potential employment
           (the more responses we receive, the more complete the
           directory).
  
        -- compiling a possible roster of sources for funding.
  
        -- compiling a list of graduate programs offering study in
           computational neuroscience or other connectionist fields
           with an emphasis on interdisciplinary work.
  
        -- making available survey information which will help to
           document the nature and magnitude of YSN members'
           concerns.
  
  Of course, we're open to any suggestions you may wish to put forward,
  and we'll be available for casual discussion throughout the conference.
  
  We look forward to seeing you there,
  
  Thomas A. Tucker      (tomy@ee.washington.edu)
  Pamela A. Abshire     (pa11722@medtronic.com)
 


------------------------------

Subject: Very Fast Simulated Reannealing code available for beta testing
From:    Lester Ingber <ingber@alumni.cco.caltech.edu>
Date:    Sun, 22 Nov 92 20:33:57 -0800

           VERY FAST SIMULATED REANNEALING (VFSR) (C)

             Lester Ingber ingber@alumni.caltech.edu
                               and
              Bruce Rosen rosen@ringer.cs.utsa.edu

1.  License and Availability


1.1.  GNU Copyleft License

     This  Very  Fast  Simulated Reannealing (VFSR) code is being
made available under a GNU COPYING-LIB "copyleft" license, and is
owned  jointly  by Lester Ingber and Bruce Rosen[1].  Please read
the copy of this license contained in this directory.

1.2.  NETLIB Electronic Availability of VFSR

     You can obtain our code  from  NETLIB.   This  can  be  done
interactively, or you can obtain it by electronic mail request.

1.2.1.  Interactive

     From your local machine login to research.att.com:
        local% ftp research.att.com
        Name (research.att.com:your_login_name): netlib
        Password: [type in your_login_name or anything]
        ftp> cd opt
        ftp> binary
        ftp> get vfsr.Z
        ftp> quit
After  `uncompress  vfsr.Z'  read  the  header of vfsr for simple
directions on obtaining your source files.  For example, on  most
machines, after `sh vfsr' they will reside in a VFSR directory.

1.2.2.  Electronic Mail Request

     Send the following one-line electronic mail request
        send vfsr from opt
        [For general NETLIB info, just use: send index]
to one of the NETLIB sites:
        netlib@research.att.com (AT&T Bell Labs, NJ, USA)
                [most recent version]
        netlib@ornl.gov         (Oak Ridge Natl Lab, TN, USA)
        netlib@ukc.ac.uk        (U Kent, UK)
        netlib@nac.no           (Oslo, Norway)
        netlib@cs.uow.edu.au    (U Wollongong, NSW, Australia)

2.  Background and Context

     VFSR  was  developed  in  1987 to deal with the necessity of
performing adaptive global optimization on multivariate nonlinear
stochastic  systems[2].   VFSR was recoded and applied to several
complex systems, in combat analysis[3],  finance[4],  and  neuro-
science[5].   A  comparison  has  shown  VFSR to be superior to a
standard genetic algorithm simulation on a suite of standard test
problems[6],  and  VFSR  has  been  examined  in the context of a
review of methods of simulated annealing[7].  A project comparing
standard  Boltzmann  annealing  with "fast" Cauchy annealing with
VFSR has concluded that VFSR is a superior algorithm[8].  A paper
has  indicated how this technique can be enhanced by combining it
with some other powerful algorithms[9].

2.1.  Efficiency Versus Necessity

     VFSR is not necessarily an "efficient" code.   For  example,
if  you know that your cost function to be optimized is something
close to a parabola, then a simple gradient Newton search  method
most  likely  would  be faster than VFSR.  VFSR is believed to be
faster and more robust than other simulated annealing  techniques
for  most  complex problems with multiple local optima; again, be
careful to note that some problems  are  best  treated  by  other
algorithms.   If you do not know much about the structure of your
system, and especially if it has  complex  constraints,  and  you
need  to  search for a global optimum, then we heartily recommend
our VFSR code to you.

2.2.  Outline of Use

     Set up the VFSR interface: your program should be divided
into two basic modules.  (1) The user calling procedure,
containing the cost function to be minimized (or its negative if
you require a global maximum), contained here in user.c and
user.h.  (2) The VFSR optimization procedure, contained here in
vfsr.c and vfsr.h.  Furthermore, there are some options to
explore in the Makefile.  We assume there will be no confusion
over the standard uses of the term "parameter" in different
contexts, e.g., as an argument passed to a subroutine or as a
physical coefficient in a cost function.

     In VFSR/TESTS we have included some user_out files from some
sample runs, containing timed runs  on  a  Sun4c/4.1.3  (SPARC-2)
using    compilers    cc,   acc   and   gcc-2.3.1,   and   on   a
Dec5100/Ultrix-4.2 using compilers cc and gcc-2.2.2.  No  attempt
was  made  to optimize the use of any of these compilers, so that
the runs do not really signify any testing of these compilers  or
architectures;  rather  they  are  meant to be used as a guide to
determine what you might expect on your own machine.

3.  Makefile

     This file was generated using `make doc'.  The Makefile con-
tains  some options for formatting this file differently, includ-
ing the PostScript version README.ps and the text version README.

     Since  complex  problems  by  their  nature  are often quite
unique, it is unlikely that our default parameters are just right
for  your problem.  However, our experience has shown that if you
a priori do not have any reason to determine your own parameters,
then  you might do just fine using our defaults, and we recommend
using them as a first-order guess.  Most of our defaults  can  be
changed  simply  by uncommenting lines in the Makefile.  Remember
to recompile the entire code every time you change  any  options.
Depending  on  how you integrate VFSR into your own user modules,
you may wish to modify this Makefile or  at  least  use  some  of
these options in your own compilation procedures.

     Read  through  all the options in the Makefile.  As the com-
ments therein suggest, it may be necessary to change some of them
on  some  systems.   Here are just a couple of examples you might
consider:

3.1.  SMALL_FLOAT

     For example, on one Convex running our test problem in
user.c the SMALL_FLOAT default was too small and the code
crashed.  A larger value was found to give reasonable results.

     The reason is that the fat tail of VFSR, associated with
high parameter temperatures, is very important for searching the
breadth of the ranges, especially in the initial stages of the
search.  However, the parameter temperatures require small values
at the final stages of the search to converge to the best
solution, though this point is reached very quickly given the
exponential schedule proven permissible for VFSR in the
referenced publications.  Note that our test problem in user.c is
a particularly nasty one, with 1E20 local minima, requiring VFSR
to search over many orders of magnitude of the cost function
before correctly finding the global minimum.

     In VFSR/TESTS we have included vfsr_out files comparing
results using SMALL_FLOAT=1.0E-16, SMALL_FLOAT=1.0E-18 (the
default), and SMALL_FLOAT=1.0E-20.  Although the same final
results were achieved, the intermediate calculations differ
somewhat.

3.2.  HAVE_ANSI

     As another example, setting HAVE_ANSI=FALSE will permit  you
to  use  an older K&R C compiler.  This option can be used if you
do  not  have  an   ANSI   compiler,   overriding   the   default
HAVE_ANSI=TRUE.

4.  User Module

     We  have  set  up this module as user.c and user.h.  You may
wish to combine them into one file, or you may wish  to  use  our
VFSR  module  as  one component of a library required for a large
project.

4.1.  int main(int argc, char **argv)

     In main, set up your initializations and calling  statements
to vfsr.  In the files user.c and user.h, we have provided a sam-
ple program, as well as a sample cost function  for  your  conve-
nience.   If you do not intend to pass parameters into main, then
you can just declare it as main() without the argc and argv argu-
ments.

4.2.  void user_initialize_parameters()

     Before calling vfsr, the user must allocate storage and ini-
tialize some of the passed parameters.   A  sample  procedure  is
provided  as a template.  In this procedure the user should allo-
cate storage for the passed arrays and  define  the  minimum  and
maximum  values.   Below, we detail all the parameters which must
be initialized.  If your arrays are of size 1, still use them  as
arrays as described in user.c.
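     As a purely illustrative sketch of such a procedure, for a
hypothetical two-parameter problem -- the global names and the
values here are assumptions for the example, not the identifiers
or defaults in user.h:

```c
#include <stdlib.h>

#define N_PARAMS 2   /* hypothetical dimensionality for this sketch */

/* Globals for the arrays vfsr expects; names are illustrative. */
int *parameter_type;
double *parameter_initial_final;
double *parameter_minimum, *parameter_maximum;
double *tangents, *curvature;

void user_initialize_parameters(void)
{
    int i;

    /* Allocate one slot per parameter for each passed array. */
    parameter_type          = malloc(N_PARAMS * sizeof(int));
    parameter_initial_final = malloc(N_PARAMS * sizeof(double));
    parameter_minimum       = malloc(N_PARAMS * sizeof(double));
    parameter_maximum       = malloc(N_PARAMS * sizeof(double));
    tangents                = malloc(N_PARAMS * sizeof(double));
    curvature               = malloc(N_PARAMS * sizeof(double));

    for (i = 0; i < N_PARAMS; i++) {
        parameter_type[i] = 0;              /* REAL_TYPE */
        parameter_minimum[i] = -10.0;       /* bounded search space */
        parameter_maximum[i] =  10.0;
        parameter_initial_final[i] = 0.0;   /* starting guess within range */
    }
}
```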

4.3.  double user_cost_function(double *x, int *valid_flag)

     You  can  give any name to user_cost_function as long as you
pass this name to vfsr.  x (or whatever name you pass to vfsr) is
an array of doubles representing a set of parameters to evaluate,
and valid_flag (or whatever name you pass to vfsr) is the address
of  an integer.  In user_cost_function, *valid_flag should be set
to FALSE (0) if the parameters violate a set of user defined con-
straints  (e.g.,  as  defined by a set of boundary conditions) or
TRUE  (1)  if  the  parameters  represent  a  valid  state.    If
*valid_flag is FALSE, no acceptance test will be attempted, and a
new set of trial parameters  will  be  generated.   The  function
returns a real value which VFSR will minimize.
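     To make the calling convention concrete, here is a minimal
sketch of such a function -- an invented quadratic with a toy
constraint, not the test problem shipped in user.c:

```c
#define TRUE  1
#define FALSE 0

/* Illustrative cost function: a two-parameter quadratic bowl with
 * an invented constraint x[0] + x[1] <= 10.  vfsr minimizes the
 * returned value; *valid_flag tells it whether the trial
 * parameters satisfy the user's constraints. */
double user_cost_function(double *x, int *valid_flag)
{
    if (x[0] + x[1] > 10.0) {
        *valid_flag = FALSE;  /* invalid: no acceptance test is made */
        return 0.0;           /* return value is ignored in this case */
    }
    *valid_flag = TRUE;
    return (x[0] - 1.0) * (x[0] - 1.0)
         + (x[1] + 2.0) * (x[1] + 2.0);  /* minimum 0 at (1, -2) */
}
```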

4.4.  double user_random_generator()

     A random number generator function must be passed next.  It
may be as simple as one of the UNIX random number generators
(e.g. drand48), or may be user defined, but it should return a
real value within [0,1) and not take any parameters.  We have
provided a good random number generator, randflt, and its
auxiliary routines in the user module.
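     If you prefer a UNIX generator over randflt, a wrapper
matching the required interface might look like this (a sketch;
drand48 availability varies by system):

```c
#include <stdlib.h>

/* Wrapper matching the interface vfsr expects: no arguments,
 * returning a double uniformly distributed in [0,1).  Uses the
 * UNIX drand48 generator mentioned above. */
double user_random_generator(void)
{
    return drand48();
}
```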

4.5.  void initialize_rng()

     Most random number generators should be "warmed-up" by call-
ing a set of dummy random numbers.

4.6.  void print_time(char *message)

     As  a convenience, we have included this subroutine, and its
auxiliary routine aux_print_time, to keep track of the time spent
during  optimization.   It  takes  as its only parameter a string
which  will  be  printed.   We   have   given   an   example   in
user_cost_function  to  illustrate  how  print_time may be called
periodically   every   set   number   of   calls   by    defining
PRINT_FREQUENCY in user.h.

4.7.
vfsr(
     user_cost_function,
     user_random_generator,
     number_parameters,
     parameter_type,
     parameter_initial_final,
     final_cost,
     parameter_minimum,
     parameter_maximum,
     tangents,
     curvature);

     This is the form of the call to vfsr from user.c.

4.8.
void vfsr(
     double (*user_cost_function) (),
     double (*user_random_generator) (),
     int number_parameters,
     int *parameter_type,
     double *parameter_initial_final,
     double final_cost,
     double *parameter_minimum,
     double *parameter_maximum,
     double *tangents,
     double *curvature)

     This is how vfsr is defined in the VFSR module, contained in
vfsr.c and vfsr.h.  Each parameter is described below as it  must
be passed to this module from the user module.

4.8.1.  double (*user_cost_function) ()

     The parameter (*user_cost_function) () is a pointer to the
cost function that you defined in your user module.

4.8.2.  double (*user_random_generator) ()

     As discussed above, a pointer to the random number generator
function, defined in the user module, must be passed next.

4.8.3.  int number_parameters

     An  integer containing the dimensionality of the state space
is passed next.  Each of the arrays that follow are to be of  the
size number_parameters.

4.8.4.  int *parameter_type

     This  integer  array  is  passed next.  Each element of this
array (each flag) is either REAL_TYPE (0) (indicating the parame-
ter  is a real value) or INTEGER_TYPE (1) (indicating the parame-
ter can take on only integer values).

4.8.5.  double *parameter_initial_final

     Next, an array of doubles is passed.  Initially, this  array
holds  the  set  of  starting parameters which should satisfy any
constraints or boundary conditions. Upon  return  from  the  VFSR
procedure,  the  array  will  contain  the best set of parameters
found by vfsr to minimize the user's cost  function.   Experience
shows  that  any guesses within the acceptable ranges should suf-
fice, since initially the system is at high annealing temperature
and VFSR samples the breadth of the ranges.

4.8.6.  double final_cost

     This  double should be defined in the calling program.  Upon
return from the vfsr call, it will  be  the  minimum  cost  value
found by vfsr.

4.8.7.  double *parameter_minimum


4.8.8.  double *parameter_maximum

     These  two  arrays  of doubles should also be passed.  Since
VFSR works only on bounded search  spaces,  these  arrays  should
contain the minimum and maximum values each parameter can attain.
If you aren't sure, try a factor of 10 or 100 times  any  reason-
able  values.   The  exponential  temperature  annealing schedule
should quickly sharpen the search  down  to  the  most  important
region.

4.8.9.  double *tangents


4.8.10.  double *curvature

     These  two  arrays  of  doubles  should  be passed last.  On
return from vfsr, for real parameters, they contain the first and
second  derivatives  of  the  cost  function  with respect to its
parameters.  These can be useful for  determining  the  value  of
your  fit.  In this implementation of VFSR, the tangents are used
to determine the relative reannealing among parameters.

5.  Bug Reports

     While we do not have time to help you solve your own  appli-
cations,  we  do  want  VFSR  to be helpful to a large community.
Therefore, we welcome your bug reports and constructive critiques
regarding our code.  "Flames" will be rapidly quenched.


References

1.   L. Ingber and B. Rosen, "vfsr," Very Fast Simulated
     Reannealing (VFSR) Source Code, NETLIB Electronic Ftp
     Archive, netlib@research.att.com (1992).

2.   L. Ingber, "Very fast simulated re-annealing," Mathl.
     Comput. Modelling 12 (8), pp. 967-973 (1989).

3.   L. Ingber, H. Fujio, and M.F. Wehner, "Mathematical
     comparison of combat computer models to exercise data,"
     Mathl. Comput. Modelling 15 (1), pp. 65-90 (1991).

4.   L. Ingber, "Statistical mechanical aids to calculating term
     structure models," Phys. Rev. A 42 (12), pp. 7057-7064
     (1990).

5.   L. Ingber, "Statistical mechanics of neocortical
     interactions: A scaling paradigm applied to
     electroencephalography," Phys. Rev. A 44 (6), pp. 4017-4060
     (1991).

6.   L. Ingber and B. Rosen, "Genetic algorithms and very fast
     simulated reannealing: A comparison," Mathl. Comput.
     Modelling 16 (11), pp. 87-100 (1992).

7.   L. Ingber, "Simulated annealing: Practice versus theory,"
     Statistics Comput. (to be published) (1993).

8.   B. Rosen, "Function optimization based on advanced simulated
     annealing," Report, University of Texas, San Antonio, TX
     (1992).

9.   L. Ingber, "Generic mesoscopic neural networks based on
     statistical mechanics of neocortical interactions," Phys.
     Rev. A 45 (4), pp. R2183-R2186 (1992).

[*]  Some  (p)reprints  can  be  obtained  via anonymous ftp from
     ftp.umiacs.umd.edu [128.8.120.23] in the  pub/ingber  direc-
     tory.


  |  Prof. Lester Ingber               ingber@alumni.caltech.edu  #
  |  P.O. Box 857                                                 #
  |  McLean, VA 22101       703-848-1859 = [10ATT]0-700-L-INGBER  #


------------------------------

End of Neuron Digest [Volume 10 Issue 21]
*****************************************

