        A Spreadsheet Implementation of a Primitive Neural Network
                          Thomas and Dafna Yee

     Neural networks are a class of computer models that seek to emulate
the way in which the brain stores memories.  Unlike other models of
brain function, neural network models assume that concepts are not stored at
discrete locations in the brain, but rather are distributed around the nodes
of the system. For this reason, neural networks are also referred to as 
"parallel distributed" or "connectionist" models of brain function.

     Neural networks have achieved an impressive degree of success in various
practical applications such as stock market analysis, manufacturing process 
control, speech synthesis, and handwriting recognition.  They have an advantage
over other approaches to artificial intelligence (e.g., "expert systems") in
that human instructors need not define the precise rules of inference by
which they operate.  Neural networks can be trained in pattern
recognition tasks even when the recognition criteria are poorly defined.

     So how do neural networks work?

     It is impossible to understand the workings of a neural network by
simply reading a textbook:  one needs actual experience with a computer model.
A number of sophisticated neural network demonstrators are available on the
shareware market.  Unfortunately, we've found that
their fancy user interfaces and automatic operation tend to inhibit one's
understanding. To learn how a neural network operates, there is nothing like
getting down and actually manipulating a network "with your bare hands."

     To provide basic insight into how a neural network operates, we have
implemented a primitive neural network using the well-known spreadsheet pro-
gram Lotus 1-2-3 (version 2.2).  We will teach this network to recognize and
respond appropriately to several input patterns.  We chose Lotus 1-2-3 rather
than a conventional programming language for several reasons:  (1) For most
of us, spreadsheets are much easier to understand than programming languages;
(2) spreadsheet programs capable of reading .WK1 files are widely available,
and many people are already familiar with their use; (3) spreadsheet parameters
can be freely altered in the middle of a computational run and their effects
observed; (4) after struggling with the problems of individually referencing
numerous worksheet cells, one develops a deep appreciation for the power of
matrix notation and how it vastly simplifies the formulation of mathematical
problems.

     The following text describes how to use the spreadsheet in terms of
the old Lotus version 2.2 command-line. All modern spreadsheet programs have
easy equivalents to these commands.

     A spreadsheet is a rectangular array of cells.  Within each cell may be
contained (1) a numeric value, (2) a label, or (3) a formula.  Each cell is
referenced by a letter-number combination.  For example, B3 refers to
column B, row 3.  A typical formula in a cell may be "+C3+D3", which means,
"Add the values of cells C3 and D3."

     Move the supplied worksheet file NEURAL.WK1 to the default directory of
your preferred Lotus-compatible spreadsheet program.  Run the spreadsheet
program, and retrieve the worksheet file. If using Lotus, use the following
keystrokes (ignore the stuff within the parentheses):

               /F(ile) R(etrieve) NEURAL <Enter>

     You should see something like the following:

         A               B                    C                    D
1   weight 1    +B1+B7*B9*(B11-B12)  +C1+B7*B9*(C11-C12)  +D1+B7*B9*(D11-D12)
2   weight 2    +B2+B7*C9*(B11-B12)  +C2+B7*C9*(C11-C12)  +D2+B7*C9*(D11-D12)
3   weight 3    +B3+B7*D9*(B11-B12)  +C3+B7*D9*(C11-C12)  +D3+B7*D9*(D11-D12)
4
5   activity    +B1*B9+B2*C9+B3*D9   +C1*B9+C2*C9+C3*D9   +D1*B9+D2*C9+D3*D9
6
7   NU                         0.1
8
9   INPUT                        1                    1                    0
10
11  TEACHER                      1                    0                    0
12  ACTIVITY                  0.00                 0.00                 0.00
13
14  WEIGHT 1                0.0000               0.0000               0.0000
15  WEIGHT 2                0.0000               0.0000               0.0000
16  WEIGHT 3                0.0000               0.0000               0.0000

     The formulas in cells B1 through D3 have so-called "circular" refer-
ences.  For example, in spreadsheet cell B1, we have inserted the formula:

                    +B1+B7*B9*(B11-B12)

which means, "To the pre-existing value in the cell B1, add the product of B7,
B9, and (B11-B12)."
     The values calculated by the formulas in cells B1 through D3 are
displayed in cells B14 through D16.  Likewise, the values calculated by the
formulas in B5 through D5 are displayed in cells B12 through D12.
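
     The effect of one press of the Recalculate key can be sketched in a few
lines of Python (our sketch, assuming each pass first computes the activities
from the current weights and then applies the corrections):

```python
# One "Recalculate" pass of the worksheet, sketched in plain Python.
# Cell names follow the worksheet: B7 is NU, B9..D9 the INPUT row,
# B11..D11 the TEACHER row, and weights[i][j] mirrors cells B1..D3.

def recalculate(weights, inputs, teacher, nu):
    # Row 5 ("activity"): each output is a weighted sum of the inputs,
    # using one COLUMN of the weight matrix, e.g. +B1*B9+B2*C9+B3*D9.
    activity = [sum(weights[i][j] * inputs[i] for i in range(3))
                for j in range(3)]
    # Rows 1-3: the circular formulas +B1+B7*B9*(B11-B12) etc. add a
    # correction to each pre-existing weight.
    for i in range(3):
        for j in range(3):
            weights[i][j] += nu * inputs[i] * (teacher[j] - activity[j])
    return activity

weights = [[0.0] * 3 for _ in range(3)]     # all weights start at zero
recalculate(weights, [1, 0, 1], [1, 0, 0], 0.1)
```

Each press of the Recalculate key corresponds to one call of recalculate().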

     Notice that the INPUT row, the TEACHER row, and the NU row are
highlighted.  These are the only cells into which you may enter values.  All
the other cells contain the results of calculations that you initiate by
pressing the Recalculate <F9> key.  If you try to enter values into these
protected cells, the program will complain with a "beep."

     Neural networks attempt to emulate the learning processes that go on in
the brain.  Consider how you might teach a child the alphabet:  Your child
sees a roof-shaped angle with a cross bar.  You, the teacher, say "that's an
'A,'  honey."  If she says "P," you respond, "No, no, that's an 'A,' honey."
You repeat the process until she gets it right.  Then you try the next letter,
a vertical bar with two mounds to the right.  You say, "that's a 'B,' honey."
When she eventually gets "B" right you go back to "A" and double-check that
she is not mis-identifying everything as "B."  Then you go on to the next
letter and the next...

     In the following exercise, you will attempt to teach the neural network
three "letters" of a primitive "alphabet."  The process will involve multiple
iterations of the following sequence:

Input: The network sees the letter "1 0 1"     Teacher: Say "1 0 0," honey!
               (repeat until the network gets it right)
Input: The network sees the letter "1 1 0"     Teacher: Say "0 1 0," honey!
               (repeat until the network gets it right)
Input: The network sees the letter "0 1 1"     Teacher: Say "0 0 1," honey!
               (repeat until the network gets it right)
Repeat with the first letter and so on, until the network gets all of the
letters right.
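
     The whole teaching schedule above can be sketched as a short Python loop
(our sketch, using the same delta-rule update the worksheet performs on each
recalculation; the 0.1 tolerance and 200-sweep cap are our own choices):

```python
# The three "letters" and their meanings, as in the schedule above.
letters  = [[1, 0, 1], [1, 1, 0], [0, 1, 1]]   # INPUT patterns
meanings = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # TEACHER patterns
NU = 0.1

def activity(w, inp):
    # Weighted sums, one per output, as in cells B5..D5.
    return [sum(w[i][j] * inp[i] for i in range(3)) for j in range(3)]

def teach(w, inp, target, nu):
    # One recalculation: a Widrow-Hoff correction to every weight.
    act = activity(w, inp)
    for i in range(3):
        for j in range(3):
            w[i][j] += nu * inp[i] * (target[j] - act[j])

def close_enough(w, inp, target, tol=0.1):
    return max(abs(t - a) for t, a in zip(target, activity(w, inp))) <= tol

weights = [[0.0] * 3 for _ in range(3)]
for sweep in range(200):                       # "have patience!"
    for inp, target in zip(letters, meanings):
        while not close_enough(weights, inp, target):
            teach(weights, inp, target, NU)    # repeat until it gets it right
    if all(close_enough(weights, inp, target)
           for inp, target in zip(letters, meanings)):
        break                                  # all three letters learned
```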


                   How to Use the NEURAL spreadsheet:
-----------------------------------------------------------------------------

     Begin by double-checking that the NU parameter B7 is set to 0.1.  If
not, move the cursor to B7 and type 0.1 <Enter>

     Lotus note:  If somehow you've done something wrong and you find yourself
stuck, hit the <Esc> key and keep hitting it until you get into the "READY"
mode.  (Pay attention to the upper right hand corner of the screen to find out
when you're safely back in the "READY" mode.)

LETTER 1: We start by teaching the program that the letter "1 0 1" means
"1 0 0"

     STEP 1:  Using arrow keys, move the cursor to line 9, the Input line:
          Move the cursor to B9.  Type 1
          Move the cursor to C9.  Type 0
          Move the cursor to D9.  Type 1

     STEP 2:  Using arrow keys, move the cursor to line 11, the Teacher line:
          Move the cursor to B11.  Type 1
          Move the cursor to C11.  Type 0
          Move the cursor to D11.  Type 0
     Type <Enter> or move the cursor to enter the last value into D11.

     STEP 3:  Recalculate the spreadsheet. (If using Lotus, hit the <F9> key.)

     STEP 4:  Do the values in line 12, the ACTIVITY line, approximately
     equal the values in the TEACHER line?  If not, keep hitting
     Recalculate until the ACTIVITY line approximately equals the
     TEACHER line.

LETTER 2: Next, we teach the program that the letter "1 1 0" means "0 1 0"

     STEP 1:  Using arrow keys, move the cursor to line 9, the Input line:
          Move the cursor to B9.  Type 1 
          Move the cursor to C9.  Type 1
          Move the cursor to D9.  Type 0 

     STEP 2:  Using arrow keys, move the cursor to line 11, the Teacher line:
          Move the cursor to B11.  Type 0
          Move the cursor to C11.  Type 1 
          Move the cursor to D11.  Type 0 
     Type <Enter> or move the cursor to enter the last value into D11.

     STEP 3:  Recalculate the spreadsheet. (If using Lotus, hit the <F9> key.)

     STEP 4:  Do the values in line 12, the ACTIVITY line, approximately
     equal the values in the TEACHER line?  If not, keep hitting
     Recalculate until the ACTIVITY line approximately equals the
     TEACHER line.

LETTER 3: Next, we teach the program that the letter "0 1 1" means "0 0 1"

     STEP 1:  Using arrow keys, move the cursor to line 9, the Input line:
          Move the cursor to B9.  Type 0 
          Move the cursor to C9.  Type 1
          Move the cursor to D9.  Type 1 

     STEP 2:  Using arrow keys, move the cursor to line 11, the Teacher line:
          Move the cursor to B11.  Type 0
          Move the cursor to C11.  Type 1 
          Move the cursor to D11.  Type 1 
     Type <Enter> or move the cursor to enter the last value into D11.

     STEP 3:  Recalculate the spreadsheet. (If using Lotus, hit the <F9> key.)

     STEP 4:  Do the values in line 12, the ACTIVITY line, approximately
     equal the values in the TEACHER line?  If not, keep hitting
     Recalculate until the ACTIVITY line approximately equals the
     TEACHER line.

START OVER AGAIN WITH LETTER 1, and keep going (have patience!) until the
network satisfactorily identifies all three letters!

     Notice how the values in the matrix B14 through D16 change during this
learning process.

     When the network has finally identified all the letters to your satis-
faction, set the NU parameter, B7, to 0.  This will prevent any inadvertent
changes in the network parameters.

     Save the modified neural network. If using Lotus, use the following
keystroke sequence:
          /F(ile) S(ave) NEURAL1 <Enter>

     Exit the program. If using Lotus, use the following keystroke sequence:
          /Q(uit) Y(es)

                    Why the NEURAL spreadsheet works:
-----------------------------------------------------------------------------

     OK, so you've taught this neural network to read a few simple "letters."
What went on?  

     The above spreadsheet implements a simple associator model.  The output 
activity is a weighted sum of the inputs.  The weights applied to the inputs
are modified using the Widrow-Hoff rule.

     The following shows the actual results after a short teaching session:

         A               B                    C                    D
1   weight 1    +B1+B7*B9*(B11-B12)  +C1+B7*B9*(C11-C12)  +D1+B7*B9*(D11-D12)
2   weight 2    +B2+B7*C9*(B11-B12)  +C2+B7*C9*(C11-C12)  +D2+B7*C9*(D11-D12)
3   weight 3    +B3+B7*D9*(B11-B12)  +C3+B7*D9*(C11-C12)  +D3+B7*D9*(D11-D12)
4
5   activity    +B1*B9+B2*C9+B3*D9   +C1*B9+C2*C9+C3*D9   +D1*B9+D2*C9+D3*D9
6
7   NU                         0.1
8
9   INPUT                        1                  0                  1
10
11  TEACHER                      1                  0                  0
12  ACTIVITY                  1.02               0.00              -0.02
13
14  WEIGHT 1                0.4690             0.5436            -0.5127
15  WEIGHT 2               -0.5425             0.5111             0.5323
16  WEIGHT 3                0.5492            -0.5410             0.4917

     The complete input matrix I of input patterns is

                                 1 0 1
                      I   =      1 1 0
                                 0 1 1

     Each ROW of the input matrix represents an input pattern.

     The pattern associator matrix A is

                                 0.4690  0.5436 -0.5127
                      A   =     -0.5425  0.5111  0.5323
                                 0.5492 -0.5410  0.4917

     The output matrix O at this stage of training is

                                 1.02  0.00 -0.02
                      O   =     -0.07  1.05  0.02
                                 0.01 -0.03  1.02

     Each row of the output matrix represents an output pattern.

     Each component of a row of the output matrix O is a weighted sum of the
components of the corresponding row of the input matrix I, the weights being
taken from a COLUMN of the pattern associator matrix A.  In the given example
output,

          1.02  =  1 *  0.4690 + 0 * -0.5425 + 1 *  0.5492
          0.00  =  1 *  0.5436 + 0 *  0.5111 + 1 * -0.5410
         -0.02  =  1 * -0.5127 + 0 *  0.5323 + 1 *  0.4917 

     In matrix notation, we can abbreviate the lengthy formulas given in
cells B5 through D5 of the spreadsheet by simply saying that O is the matrix
product of I and A.  In other words, we simply write

                         O = I x A

     This matrix multiplication process is responsible for associating an
input pattern with an output pattern. 
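
     This relation is easy to check numerically.  A short NumPy sketch, with
I and A copied from the worksheet listing above:

```python
import numpy as np

I = np.array([[1, 0, 1],                     # input patterns, one per ROW
              [1, 1, 0],
              [0, 1, 1]])
A = np.array([[ 0.4690,  0.5436, -0.5127],   # pattern associator
              [-0.5425,  0.5111,  0.5323],   # (cells B14..D16)
              [ 0.5492, -0.5410,  0.4917]])

O = I @ A          # the matrix product O = I x A
```

Rounding O to two decimals reproduces the output matrix shown above, each
row lying close to the corresponding TEACHER pattern.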
     During the learning process, the components of the pattern associator 
matrix are modified.  The formulas in cells B1 through D3 of the spreadsheet 
implement the Widrow-Hoff rule, or delta rule.  

     To understand the Widrow-Hoff or delta rule, let us examine the formula
in cell B1:
                       +B1+B7*B9*(B11-B12)

     The pre-existing weight in cell B1 is modified by the addition of the 
product B7*B9*(B11-B12).

     B11-B12 represents the difference between the TEACHER value and the  
actual ACTIVITY of the element.  The bigger the difference, the greater the  
correction that must be applied to the pre-existing weight in cell B1.

     B9 represents a component of the INPUT matrix.  The value of the input 
component dictates in part the strength of any corrections that would have to 
be applied to the pre-existing weight in cell B1.

     B7 represents NU, which regulates how quickly or slowly the network is 
permitted to change its weights.  This is commonly referred to as the 
"learning rate."

     Matrix subscript notation can be used to express in one single line all 
the formulas placed in cells B1 through D3 of the spreadsheet:

        A(i,j)  <--  A(i,j) + NU * INPUT(i) * ( TEACHER(j) - ACTIVITY(j) )

where i indexes the weights and j the outputs.
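
     In the same spirit, one recalculation pass can be written as a single
outer-product update in NumPy (our notation, not the worksheet's; here x is
the INPUT row and t the TEACHER row):

```python
import numpy as np

NU = 0.1
A  = np.zeros((3, 3))             # pattern associator, cells B1..D3
x  = np.array([1.0, 0.0, 1.0])    # INPUT row
t  = np.array([1.0, 0.0, 0.0])    # TEACHER row

def recalc(A, x, t, nu):
    o = x @ A                             # ACTIVITY row, cells B5..D5
    return A + nu * np.outer(x, t - o)    # all nine Widrow-Hoff updates

for _ in range(40):                       # forty presses of Recalculate
    A = recalc(A, x, t, NU)
```

After these passes the activity x @ A is essentially equal to the teacher
pattern.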

     The neural network shown here has only a single layer of connections
(weights) between input and output. Single-layer neural networks (perceptrons)
have many limitations.  For instance, single-layer networks are intrinsically
incapable of solving "exclusive-or" problems.  What this means is, there are
many  "alphabets" that a single-layer network is incapable of learning.  In
other words, you cannot always find a pattern associator matrix A which will
successfully correlate an input matrix I with an output matrix O.  In order to
successfully correlate arbitrary input matrices with arbitrary output matrices,
you need to implement additional layers of neurons along with some sort of
"back-propagation" scheme.

     With actual hands-on experience with the simple neural network presented
here, you should be better equipped to understand the explanations of more
complex systems.  Try out some of the other, more sophisticated neural network
programs available as shareware, and have fun!

------------------------------------------------------------------------------

     This simple introduction to neural networks is SHAREWARE.  If you've 
found that the supplied worksheet and documentation have helped your 
understanding of neural networks, the authors would like you to show your 
appreciation with monetary support.  

                              TO REGISTER:                             
   Don't send us any money.  Instead, make a charitable donation (as much as 
   you can easily afford) to the Multiple Sclerosis Foundation or any other 
   medically or socially-oriented charitable organization of your choice, and
   write us a letter telling us to whom you made a donation.  You don't have
   to tell us how much you donated.


                                        Yours truly,
                                        Thomas and Dafna Yee
                                        Awareness Productions
                                        P.O. Box 261262
                                        Plano, TX 75026-1262

                                        CompuServe: 75262,1471
                                        Internet: 75262.1471@compuserve.com

LEGAL STUFF
-----------

"Lotus" and "1-2-3" are registered trademarks of Lotus Development Corpora-
tion.

The authors assume no responsibility for any damage or loss caused by the use 
of the supplied worksheet and accompanying documentation.  If you can think of 
any way that a trivial little worksheet like the one supplied here can cause 
you damage or loss, you've got some sort of terribly suspicious mind!

If you register, the assumption is that you've actually tested out the sup-
plied worksheet and found that it is compatible with your spreadsheet program.
No refunds will be made if you subsequently find anything to be wrong. You're
stuck with that charitable donation!

                              WHO ARE WE?

Awareness Productions is a small software development company specializing in
educational shareware. Our products are distributed worldwide through thousands
of bulletin boards as well as through commercial vendors and on-line information
services such as CompuServe, Prodigy, and America On-Line. We are a member of 
the Association of Shareware Professionals.  At present (mid-1995), our product
line includes the following:

WRITE CHINESE V1.54    BBS name: CHINA154.ZIP   CompuServe name: CHINA.ZIP
Program to teach basic Chinese calligraphy. This critically acclaimed program
presents 150 of the most frequently used characters in traditional and 
simplified forms. Students practice drawing characters in a "nine-square box," 
learn Mandarin and Cantonese transliterations, and memorize definitions. 
Requires '286, VGA, mouse, HD.

MERLIN'S MATH V1.20    BBS name: MERLN120.ZIP   CompuServe name: MERLN.ZIP
Episode 1 of the Merlin's Math series teaches multiplication of multiple-
digit numbers, up to 5 by 5 digits. Starting as an apprentice magician, the 
student gains in power by solving math problems of increasing difficulty.
Registration brings you Episode 2, teaching long division.
Requires '286 or higher, mouse, and VGA.

MERLIN'S MUSIC V1.02    BBS name: MUSIC102.ZIP   CompuServe name: MMUSIC.ZIP
Fun activities teach basic musical notation. Children use the mouse to enter notes 
on a musical staff and can hear their compositions played on the PC speaker. 
Children can save their masterpieces to work on later. Program also plays 
excerpts of folk tunes, hymns, nursery rhyme music and national anthems. 
Requires '286, VGA, mouse.  Not compatible with Windows.

CULTURAL AWARENESS V1.28    BBS name: AWARE128.ZIP  CompuServe name: AWARE.ZIP
Educational game for cultural literacy. Two levels of play. At Novice level, 
the game is suitable for sixth grade and up. At Advanced level, the game will
challenge adults! Topics include proverbs, idioms, grammar, literature, music,
art, history, geography, world religions, science, math... 
Requires hard drive.

LERNE CHINESISCH SCHREIBEN V1.54   
BBS name: CHINE154.ZIP    CompuServe name: CHINAG.ZIP
This program for learning Chinese writing teaches 150 of the most frequently
used ideograms in traditional and simplified forms.  Students practice
drawing the ideograms in a frame divided into nine squares, and learn the
Mandarin and Cantonese transliterations and the meanings of the ideograms.
Requires a '286 with VGA, HD, and mouse.

To receive a 1.44 M disk of our shareware products, please send us a check for
$6.00 (drawn on a USA bank) or a money order denominated in USA dollars to our
address as given above. We also accept VISA, Mastercard, and American Express. 
If ordering by charge card, remember that in addition to your card number, we 
need the expiration date and your signature.
