Perceptron

This module creates a layer of an arbitrary number of perceptron (or rather perceptron-like) nodes. There are several learning rules and activation types to choose from, and a variable or two to manipulate. The weight updates can be applied instantly at each tick, in batches, or partially from the changes in the current and the previous tick (the momentum update). These different update schemes are called learning types. Finally, the module has separate inputs for training and for mere activation (calculating an output).

Example XML definition

A simple example

  <module
      class = "Perceptron"
      name = "Perceptron"
  />
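
A more complete example

The definition below sets some of the parameters described under Parameters. The values are illustrative only, not recommended settings:

  <module
      class = "Perceptron"
      name = "Perceptron"
      learning_rule = "delta"
      activation_type = "tanh"
      learning_type = "momentum"
      momentum_ratio = "0.3"
      learning_rate = "0.05"
      normalize_target = "true"
  />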

Parameters

class (string)
    The class name for the module; must be "Perceptron".

name (string)
    The name of this instance of the module.

rand_weights_min (float, default: -0.5)
    Lower limit of the initial randomized weights.

rand_weights_max (float, default: 0.5)
    Upper limit of the initial randomized weights.

learning_rate (float, default: 0.1)
    The factor by which each weight change is multiplied.

learning_rate_mod (choices: none, sqrt, log; default: none)
    How to modify (decrease) the learning rate over time. With sqrt the
    current rate is learning_rate_now = learning_rate / (0.10 * sqrt(tick + 100)),
    and with log it is learning_rate_now = learning_rate / (0.42 * ikaros::log(tick + 10)).
    See the sketch below.

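As a sketch, the two modifiers can be written straight from the formulas above (the function name and signature here are illustrative, not taken from the module's source):

  #include <cmath>
  #include <string>

  // Decay the base learning rate over time, following the formulas
  // given for the sqrt and log modifiers.
  float modified_learning_rate(float learning_rate, long tick, const std::string &mod)
  {
      if (mod == "sqrt")
          return learning_rate / (0.10f * std::sqrt(float(tick + 100)));
      if (mod == "log")
          return learning_rate / (0.42f * std::log(float(tick + 10)));
      return learning_rate; // "none": use the base rate unchanged
  }
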
bias (float, default: 1.0)
    A bias is an extra, fixed input node; this parameter is the value of
    that node. 0 is not an allowed value.

learning_type (choices: instant, batch, momentum; default: instant)
    Decides whether the nodes should learn immediately at each tick
    (instant), in batches, or smeared out over ticks (momentum).

momentum_ratio (float, default: 0.42)
    If momentum is used, this is the fraction of the weight change taken
    from the previous tick. See the sketch below.

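One plausible reading of the momentum update, in which momentum_ratio blends the previous tick's weight change into the current one (a sketch under that assumption, with illustrative names):

  // Blend the current weight change with the previous tick's change;
  // momentum_ratio is the fraction taken from the previous tick.
  float momentum_delta(float delta_now, float delta_prev, float momentum_ratio)
  {
      return (1.0f - momentum_ratio) * delta_now + momentum_ratio * delta_prev;
  }
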
activation_type (choices: step, sign, sigmoid, tanh; default: step)
    What kind of activations the nodes should give. step and sigmoid go with
    0/1 targets, while sign and tanh go with -1/1 targets (see the
    normalize_target parameter). The standard forms of the four functions
    are sketched below.

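The four activation types correspond to the standard functions below. This is a sketch: step uses the step_threshold parameter, and the exact shapes used by the module's source may differ in detail:

  #include <cmath>

  float step(float net, float threshold)  // 0/1 output
  {
      return net > threshold ? 1.0f : 0.0f;
  }

  float sign_act(float net)               // -1/1 output
  {
      return net >= 0.0f ? 1.0f : -1.0f;
  }

  float sigmoid(float net)                // smooth output in (0, 1)
  {
      return 1.0f / (1.0f + std::exp(-net));
  }

  float tanh_act(float net)               // smooth output in (-1, 1)
  {
      return std::tanh(net);
  }
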
learning_rule (choices: rosenblatt, rosenblatt_margin, may, alpha_lms, mu_lms, delta; default: rosenblatt)
    Which learning rule to use. They correspond, in order, to equations
    3.1.2, 3.1.16, 3.1.29, 3.1.30, 3.1.35 and 3.1.50 in the book
    'Fundamentals of Artificial Neural Networks' by Hassoun. Note that
    rosenblatt_margin and may do not work with the step and sigmoid
    activation types: those activation types expect 0/1 targets, while
    rosenblatt_margin and may expect -1/1 targets. So even if you do not use
    step/sigmoid, make sure the targets are -1/1 and nothing else. Note also
    that delta only works with the tanh and sigmoid activation types. A
    sketch of the plain rosenblatt rule follows.

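For orientation, the plain rosenblatt rule has the familiar textbook form below (a sketch, not the module's source; the other rules differ in their error terms, margins and normalization):

  // Classic perceptron update: move each weight by
  // learning_rate * (target - output) * input.
  void rosenblatt_update(float *w, const float *x, int n,
                         float target, float output, float learning_rate)
  {
      float error = target - output;
      for (int i = 0; i < n; i++)
          w[i] += learning_rate * error * x[i];
  }
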
batch_size (int, default: 42)
    If the batch learning type is used, how big the batch should be; that
    is, after how many ticks the nodes should be updated.

step_threshold (float, default: 0.0)
    If the step activation type is used, the threshold that must be
    exceeded for activation to occur.

margin (float, default: 0.2)
    For the rosenblatt_margin and may learning rules.

alpha (float, default: 0.1)
    For the alpha_lms learning rule.

mu (float, default: 0.1)
    For the mu_lms learning rule.

beta (float, default: 1.0)
    For the delta learning rule.

correct_average_size (int, default: 42)
    The number of previous ticks over which the CORRECT output is averaged.

normalize_target (bool, default: false)
    The different activation types expect different target values, sometimes
    -1/1 and sometimes 0/1. If normalize_target is set to true, the module
    tries to convert your target values to the expected values when they do
    not suit the chosen activation type. More specifically: with step/sigmoid
    the target is set to 0.0 if it is 0.0 or less, and to 1.0 otherwise; with
    sign/tanh the target is set to -1.0 if it is 0.0 or less, and to 1.0
    otherwise. That conversion is sketched below.
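
The conversion amounts to the following (a sketch with illustrative names):

  // Map an arbitrary target value onto the range the chosen
  // activation type expects.
  float normalized_target(float target, bool wants_zero_one)
  {
      if (wants_zero_one)                       // step, sigmoid
          return target <= 0.0f ? 0.0f : 1.0f;
      return target <= 0.0f ? -1.0f : 1.0f;     // sign, tanh
  }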

Module Connections

Inputs

INPUT
    The inputs to the perceptrons from which to calculate the output. An
    array of floats.

T_INPUT
    The inputs to the perceptrons to learn from. An array of floats.

T_TARGET
    The targets of the perceptrons (their desired outputs when training).
    This array is expected to be filled with 0/1 or -1/1 values, depending
    on which activation_type (step, sign, sigmoid or tanh) will be used;
    see the normalize_target parameter. The size of this array determines
    how many perceptrons the layer will have.

TRAIN
    An array with one single value. If this value is 0 at a certain tick,
    the module will not do any training; otherwise it (tries to) learn.
    Example connections are sketched below.

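As an illustration, the inputs might be wired up like this in an Ikaros XML file. The source module names here are made up, and the exact connection syntax may vary between Ikaros versions:

  <connection sourcemodule = "Stimulus"     source = "OUTPUT"
              targetmodule = "Perceptron"   target = "INPUT" />
  <connection sourcemodule = "TrainingData" source = "OUTPUT"
              targetmodule = "Perceptron"   target = "T_INPUT" />
  <connection sourcemodule = "TrainingData" source = "TARGET"
              targetmodule = "Perceptron"   target = "T_TARGET" />
  <connection sourcemodule = "TrainSwitch"  source = "OUTPUT"
              targetmodule = "Perceptron"   target = "TRAIN" />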

Outputs

OUTPUT
    The output calculated from the last INPUT. This array is the same size
    as the T_TARGET input. It contains 0/1 or -1/1 values, depending on
    which activation_type is used.

ERROR
    The error from the last input and target. An array of floats; the root
    of the sum of the squared differences between the targets and the
    outputs.

CORRECT
    An array with one float value: the proportion of recently presented
    examples that were correctly classified. How many examples the average
    covers depends on the correct_average_size parameter.

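In symbols, with targets t_i and outputs y_i, ERROR is sqrt(sum_i (t_i - y_i)^2); as a sketch:

  #include <cmath>

  // Root of the sum of squared differences between targets and outputs.
  float error_value(const float *target, const float *output, int n)
  {
      float sum = 0.0f;
      for (int i = 0; i < n; i++) {
          float d = target[i] - output[i];
          sum += d * d;
      }
      return std::sqrt(sum);
  }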

References

Mohamad H. Hassoun (1995). Fundamentals of Artificial Neural Networks. MIT Press.

Author

Alexander Kolodziej
LUCS

Files

Perceptron.h
Perceptron.cc
Perceptron.ikc
