Earlier in these notes I used the Rescorla-Wagner model of associative learning as an example of how to implement computational models of cognition in R. In this and later sections, I’ll expand the discussion of models to cover a variety of other models in the field. I’ll start with the backpropagation rule for learning in connectionist networks.

0.1 Scripts and data set

  • The iris_recode.csv file contains the classic iris data slightly reorganised as purely numeric data; here is the script that generated it.
  • The first version of the modelling code implements a simple two-layer backpropagation network for the iris data: iris_twolayer.R
  • The second version of the code implements the same model, but expressing the learning rules as matrix operations in order to speed up the calculations: iris_twolayer2.R

At the moment the scripts don’t do anything other than learn a classification rule. The goal for the full exercise will (eventually) be to examine what the model is learning across the series of “epochs”, and consider the relationship between this connectionist network and a probabilistic logistic regression model, but for now it’s a bit simpler than that!

In this tutorial we’ll only cover a very simple version of a backpropagation network, the two-layer “perceptron” model. There are two versions of the code posted above. The code in the iris_twolayer.R script is probably the more intuitive version, as it updates the association weights one at a time, but R code runs much faster when you express the learning rule using matrix operations, which is what the iris_twolayer2.R version does. Let’s start with a walkthrough of the more intuitive version…

0.2 Input and output patterns

First, let’s take a look at the training data. I’m going to use the classic “iris” data set that comes bundled with R, but I’ve reorganised the data in a form that is a little bit more useful for thinking about the learning problem involved, and expressed it as a numeric matrix.

library(tidyverse) # for read_csv() and the pipe

irises <- read_csv("./data/iris_recode.csv") %>% as.matrix()
## Parsed with column specification:
## cols(
##   sepal_length = col_double(),
##   sepal_width = col_double(),
##   petal_length = col_double(),
##   petal_width = col_double(),
##   context = col_integer(),
##   species_setosa = col_integer(),
##   species_versicolor = col_integer(),
##   species_virginica = col_integer()
## )

This data set has eight columns. First there are the five input features: two features relating to the petals, two features relating to the sepals, and a context feature that is 1 for every flower. Then there are three binary-valued output features corresponding to the species of each flower, dummy coded so that only the correct species has value 1 and the incorrect species have value 0. Here are the names:

input_names <- c("sepal_length", "sepal_width", "petal_length", "petal_width", "context")
output_names <- c("species_setosa", "species_versicolor", "species_virginica")

So for the first flower, the network would be given this pattern as input:

input <- irises[1, input_names]
## sepal_length  sepal_width petal_length  petal_width      context 
##          5.1          3.5          1.4          0.2          1.0

and we need to train it to produce this target pattern as the output:

target <- irises[1, output_names]
##     species_setosa species_versicolor  species_virginica 
##                  1                  0                  0

0.3 Connection weights between input and output

In its simplest form we can describe the knowledge possessed by our network as a set of associative strengths between every input feature and every output feature. In that sense we can think of it as a generalisation of how the Rescorla-Wagner model represents knowledge:

n_input <- length(input_names)
n_output <- length(output_names)
n_weights <- n_input * n_output

So what we’ll do is create a weight matrix that sets the initial associative strength to zero, with a tiny bit of random noise added to each of these associative weights:

weight <- matrix(
  data = rnorm(n_weights) * .01,
  nrow = n_input,
  ncol = n_output,
  dimnames = list(input_names, output_names)
)
weight
##              species_setosa species_versicolor species_virginica
## sepal_length  -0.0006319492       -0.020747209      -0.009599228
## sepal_width   -0.0015100447        0.012470262       0.017131124
## petal_length  -0.0019515272        0.002459665      -0.007272674
## petal_width    0.0049298408        0.002108018       0.006111287
## context        0.0151908952        0.008725164      -0.009562111

Here’s the network we want to code:

While we’re at it, store a copy for later:

old_weight <- weight

0.4 Making predictions

In the Rescorla-Wagner model, when the learner is shown a compound stimulus with elements A and B with individual associative strengths \(v_A\) and \(v_B\), the association strength for the compound AB is assumed to be additive \(v_{AB} = v_{A} + v_{B}\). We could do this for our backpropagation network too, but it is much more common to assume a logistic activation function. So we’ll need to define this activation function:

logit <- function(x) {
  y <- 1/(1 + exp(-x))
  return(y)
}

So what we do is first take the sum of the inputs and then pass them through our new logit function. So let’s say we want to compute the strength associated with the first species:

output_1 <- sum(input * weight[,1]) %>% logit() 
## [1] 0.5012342

More generally though we can loop over the three species:

# initialise the output nodes at zero
output <- rep(0, n_output)
names(output) <- output_names

# feed forward to every output node by taking a weighted sum of
# the inputs and passing it through a logit function
for(o in 1:n_output) {
  output[o] <- sum(input * weight[,o]) %>% logit() 
}

# print the result
output
##     species_setosa species_versicolor  species_virginica 
##          0.5012342          0.4876089          0.4981203

As you can see, initially the model has no knowledge at all! It’s predicting a value of about 0.5 for every category!
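Incidentally, this is where the matrix version of the code pays off: the loop over output nodes collapses into a single matrix product. Here’s a minimal sketch using made-up stand-ins for the input vector and the near-zero initial weights (the real iris_twolayer2.R script differs in detail):

```r
logit <- function(x) 1 / (1 + exp(-x))

# stand-ins for the first flower and a small random weight matrix
input <- c(sepal_length = 5.1, sepal_width = 3.5, petal_length = 1.4,
           petal_width = 0.2, context = 1.0)
set.seed(1)
weight <- matrix(rnorm(5 * 3) * .01, nrow = 5, ncol = 3)

# one matrix product computes all three weighted sums at once;
# logit() is vectorised, so it transforms each sum element-wise
output <- as.vector(logit(input %*% weight))
output   # all three values close to .5
```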

0.5 Learning from error

The prediction error is the familiar one from the Rescorla-Wagner model: the difference between the target pattern and the output the network actually produced:

prediction_error <- target - output
##     species_setosa species_versicolor  species_virginica 
##          0.4987658         -0.4876089         -0.4981203

Here is the code implementing the learning rule. What we’re doing is looping over every weight in the network, and then adjusting the strength proportional to the prediction error:

learning_rate <- .1

# for each of the weights connecting to an output node...
for(o in 1:n_output) {
  for(i in 1:n_input) {
    # associative learning for this weight scales in a manner that depends on
    # both the input value and output value. this is similar to the way that
    # Rescorla-Wagner has CS scaling (alpha) and US scaling (beta) parameters
    # but the specifics are slightly different (Equations 5 & 6 in the paper)
    scale_io <- input[i] * output[o] * (1-output[o]) 
    # adjust the weights proportional to the error and the scaling (Equation 8)
    weight[i,o] <- weight[i,o] + (prediction_error[o] * scale_io * learning_rate)
  }
}

(Let’s not worry too much about the scale_io factor for now). So let’s look at the input, output, target, and prediction error:
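For what it’s worth, this double loop is exactly the part that the matrix version of the code vectorises: the full table of weight changes is the outer product of the input vector with a per-output “delta” term. A sketch with toy values standing in for the real data (the equivalence to the loop version is exact):

```r
logit <- function(x) 1 / (1 + exp(-x))

input  <- c(5.1, 3.5, 1.4, 0.2, 1.0)   # first flower plus context
target <- c(1, 0, 0)                    # setosa
set.seed(1)
weight <- matrix(rnorm(5 * 3) * .01, nrow = 5, ncol = 3)
learning_rate <- .1

output <- as.vector(logit(input %*% weight))
prediction_error <- target - output

# delta combines the error with the output part of scale_io
delta <- prediction_error * output * (1 - output)

# input %o% delta is a 5 x 3 matrix whose [i, o] entry equals
# input[i] * output[o] * (1 - output[o]) * prediction_error[o]
weight <- weight + learning_rate * (input %o% delta)
```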

input
## sepal_length  sepal_width petal_length  petal_width      context 
##          5.1          3.5          1.4          0.2          1.0
output
##     species_setosa species_versicolor  species_virginica 
##          0.5012342          0.4876089          0.4981203
target
##     species_setosa species_versicolor  species_virginica 
##                  1                  0                  0
prediction_error
##     species_setosa species_versicolor  species_virginica 
##          0.4987658         -0.4876089         -0.4981203

Now let’s look at how the weights changed:

weight - old_weight
##              species_setosa species_versicolor species_virginica
## sepal_length    0.063592258       -0.062131952      -0.063509444
## sepal_width     0.043641746       -0.042639575      -0.043584913
## petal_length    0.017456698       -0.017055830      -0.017433965
## petal_width     0.002493814       -0.002436547      -0.002490566
## context         0.012469070       -0.012182736      -0.012452832

Not surprisingly, every weight leading to setosa has gone up and the weights to the other two species have gone down. But notice the scale: the size of each change tracks the corresponding input value, so sepal_length (5.1) changes far more than petal_width (0.2).
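That scale difference comes straight from the input[i] term in scale_io: each column of weight changes is just the input vector rescaled by a per-species constant. A quick check, rebuilding one update step with toy values:

```r
logit <- function(x) 1 / (1 + exp(-x))

input  <- c(5.1, 3.5, 1.4, 0.2, 1.0)
target <- c(1, 0, 0)
set.seed(1)
old_weight <- matrix(rnorm(5 * 3) * .01, nrow = 5, ncol = 3)
learning_rate <- .1

output <- as.vector(logit(input %*% old_weight))
delta  <- (target - output) * output * (1 - output)
weight <- old_weight + learning_rate * (input %o% delta)

# dividing each row of the change matrix by the corresponding input
# value leaves three constant columns: the per-species delta terms
sweep(weight - old_weight, 1, input, "/")
```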

0.6 Visualising the learning

For the actual simulation I’ll set the learning rate to .01, run it for 5000 epochs, and average across 100 independent runs to smooth out any artifacts of the random initialisation.1 Here’s how the weights change over epochs:
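A stripped-down sketch of what such a training run looks like, using a toy data set in place of iris_recode.csv (the real script linked under Resources does the full version):

```r
logit <- function(x) 1 / (1 + exp(-x))

# toy stand-in: 6 "flowers" with 4 random features plus a context of 1,
# and dummy-coded targets for 3 species (2 flowers per species)
set.seed(1)
inputs  <- cbind(matrix(runif(6 * 4, 0, 7), nrow = 6), context = 1)
targets <- diag(3)[rep(1:3, each = 2), ]

learning_rate <- .01
weight <- matrix(rnorm(5 * 3) * .01, nrow = 5, ncol = 3)

n_epochs <- 2000
sse <- numeric(n_epochs)   # track the sum squared error in each epoch
for (epoch in 1:n_epochs) {
  for (f in sample(nrow(inputs))) {   # random presentation order
    output <- as.vector(logit(inputs[f, ] %*% weight))
    error  <- targets[f, ] - output
    weight <- weight +
      learning_rate * (inputs[f, ] %o% (error * output * (1 - output)))
    sse[epoch] <- sse[epoch] + sum(error^2)
  }
}

sse[c(1, n_epochs)]   # the error shrinks as the network learns
```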

0.7 Resources

  • The Rumelhart et al (1986) paper cached for teaching purposes here
  • A very good, but somewhat technical summary of backpropagation by Michael Nielsen
  • Really nice resources in Python by Olivia Guest.

  1. The script to run it is here, a csv with the connection weights is here, another one (36Mb) with the sum squared error for each item on each presentation is here, and a zipped version (11Mb) is here