CS30 -- Introduction to Computer Science II
Fall, 2006
Assignment 5
(updated 9/27/2006)

Exercises:
[no exercises this week.]

Programming Projects: (Chapter 6, pgs 149-151)
1. (from the Budd text) Programming project 4.  (Modify the Vector, a-f)  Clarification on 4.e: extract the elements from the lower index (inclusive) up to the upper index (exclusive).

2. (from the Budd text) Programming project 12.  (Fast recursive Fibonacci)
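
For reference, one common way to speed up the recursion is a helper that returns the pair {fib(n-1), fib(n)} from a single recursive call, so the naive exponential blowup disappears.  A minimal sketch (the class and method names are illustrative, and this may differ from the textbook's formulation):

    // Fast recursive Fibonacci: one recursive call per level instead of two.
    public class FastFib {
        public static long fib(int n) {
            return pair(n)[1];
        }

        // Returns {fib(n-1), fib(n)}; the base case uses fib(-1) = 1, fib(0) = 0.
        private static long[] pair(int n) {
            if (n == 0) return new long[] {1, 0};
            long[] p = pair(n - 1);                 // single recursive call
            return new long[] {p[1], p[0] + p[1]};  // shift and add
        }
    }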

3. More with NeuralNetUnits.  This week, you will extend your LTU so that it can learn a particular output function.

First, in case you didn't do this to start with, you should treat the threshold as just another weight with an artificial input that is constantly -1.  That is, w0 is the effective threshold, paired with a constant input x0 = -1, and w1 through wn are the weights for the actual inputs to the LTU.  When you compute the weighted sum over indices 0 through n, the sum will be greater than 0 exactly when the weighted sum of the actual inputs exceeds the threshold (the weight on x0).
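
In code, that might look like the following (a sketch; the method name and array layout are illustrative, not part of the NeuralNetUnit interface):

    // Weighted sum with the threshold folded in as weights[0], whose input is always -1.
    public static double weightedSum(double[] weights, double[] inputs) {
        double sum = weights[0] * -1.0;         // threshold term: w0 * x0 with x0 = -1
        for (int i = 1; i < weights.length; i++) {
            sum += weights[i] * inputs[i - 1];  // actual inputs x1 .. xn
        }
        return sum;  // > 0 exactly when the actual inputs' weighted sum exceeds the threshold
    }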

Second, in order to train an LTU, we need to distinguish the actual activation from the symbolic value we associate with a particular activation.  For example, an activation of 0.91 may be treated as an output of "1", but we will need the 0.91 when it comes time to train the entire network (in which an LTU takes part).

Third, rather than using the step function we used before to determine a unit's output, we can use a sigmoid function that has certain nice properties.  The commonly used function is sigma(x) = 1/(1 + e^-x), where x is the weighted sum for a given node.  For example, a node j would have weighted sum X_j (including the threshold input x0) and would have an output Y_j = 1/(1 + e^-X_j).
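
In Java this is a one-liner (Math.exp is the standard library exponential):

    // Sigmoid activation: maps any weighted sum into the open interval (0, 1).
    public static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }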

At last, we get to training.  An error signal for node k is the difference between the desired output and the actual output (as given by the sigmoid function).  Thus, e_k = d_k - y_k, where e_k is the error for desired output d_k and actual output y_k.  Now we want to use the error to follow the gradient toward the ideal weight settings for this node, k.  Taking the derivative of the sigmoid (which is y(1 - y) for output y), we get an error gradient g_k = y_k (1 - y_k) e_k.

Finally, during training, each weight w_jk gets replaced by w_jk + a x_j g_k, where w_jk is the weight on input x_j of node k (the LTU in question).  The variable a is a learning rate parameter that you can hold constant at some small value less than 1.0.
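
Pulling the last two steps together, a sketch of the arithmetic (assuming weights and inputs live in parallel arrays with the constant -1 input at index 0; the names are illustrative):

    // Error gradient for node k: g_k = y_k * (1 - y_k) * (d_k - y_k)
    public static double errorGradient(double actual, double desired) {
        return actual * (1.0 - actual) * (desired - actual);
    }

    // Update rule: w_j <- w_j + a * x_j * g_k for every input j (including x0 = -1)
    public static void updateWeights(double[] weights, double[] inputs,
                                     double gradient, double a) {
        for (int j = 0; j < weights.length; j++) {
            weights[j] += a * inputs[j] * gradient;
        }
    }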

For this week's assignment, extend your LTU class to also implement the three methods from the NeuralNetUnit interface that we ignored the first time through.  These are: updateWeights, computeGradient, and getGradient.  You should be able to write a main method (or a driver class) that starts with a new LTU having random weights (and threshold), and then trains it by looping so that it eventually computes a basic boolean function on the given inputs.
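
One possible shape for such a driver, training toward logical AND (the LTU constructor and method signatures below are assumptions, so adapt them to your actual class and the NeuralNetUnit interface):

    // Hypothetical driver: repeatedly presents the four AND cases to one LTU.
    public class TrainAnd {
        public static void main(String[] args) {
            double[][] inputs  = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
            double[]   desired = {  0,      0,      0,      1 };
            LTU unit = new LTU(2);       // 2 inputs, random weights/threshold (assumed constructor)
            double a = 0.1;              // small constant learning rate
            for (int epoch = 0; epoch < 10000; epoch++) {
                for (int i = 0; i < inputs.length; i++) {
                    unit.setInputs(inputs[i]);            // assumed setter
                    double y = unit.computeActivation();  // sigmoid output (assumed method)
                    unit.computeGradient(desired[i]);     // stores g_k for this node
                    unit.updateWeights(a);                // w_j += a * x_j * g_k
                }
            }
        }
    }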

Submission Instructions:
On the machine where you are doing your homework, create a folder called <your email name> followed by "A5".  For example, someone with email address "cjones" would create a folder called "cjonesA5".  Inside that folder, place plain text file(s) containing your answers to any exercises.  Also place whatever Java files are necessary for your Programming Projects in the same folder.  Finally, either tar or zip the folder so that when someone extracts it, the folder "<emailname>A5" will be created.  Submit via Eureka -- I have increased the size limit on submissions.