Variants of evolutionary computation are often used to optimize the weight matrix.
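One simple variant is a (1+1) evolution strategy: mutate the weight matrix with Gaussian noise and keep the mutant only when it lowers the loss. The sketch below is illustrative only; the data, shapes, and fitness function are invented.

```python
# A minimal sketch of a (1+1) evolution strategy on a weight matrix.
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(100, 4))          # toy inputs
W_true = rng.normal(size=(4, 3))       # hidden "target" weights
Y = X @ W_true                         # toy targets

def loss(W):
    """Mean squared error of the linear map X @ W against Y."""
    return np.mean((X @ W - Y) ** 2)

W = np.zeros((4, 3))                   # candidate weight matrix
sigma = 0.1                            # mutation step size
for _ in range(5000):
    mutant = W + sigma * rng.normal(size=W.shape)
    if loss(mutant) < loss(W):         # greedy (1+1) selection
        W = mutant

print(f"final loss: {loss(W):.6f}")
```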
The weight matrix of an attractor neural network is said to follow the Storkey learning rule if it obeys:

$$w_{ij}^{\nu} = w_{ij}^{\nu-1} + \frac{1}{n}\,\varepsilon_i^{\nu}\varepsilon_j^{\nu} - \frac{1}{n}\,\varepsilon_i^{\nu} h_{ji}^{\nu} - \frac{1}{n}\,\varepsilon_j^{\nu} h_{ij}^{\nu},$$

where $n$ is the number of neurons, $\varepsilon^{\nu}$ is the $\nu$-th pattern, and $h_{ij}^{\nu} = \sum_{k=1,\,k\neq i,j}^{n} w_{ik}^{\nu-1}\varepsilon_k^{\nu}$ is a form of local field at neuron $i$.
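A minimal sketch of applying this rule, assuming bipolar (+1/-1) patterns; the function name `train_storkey` and the example patterns are illustrative, not from the original text.

```python
import numpy as np

def train_storkey(patterns):
    """Accumulate a weight matrix over bipolar patterns via the Storkey rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for eps in patterns:
        # h[i, j] = sum over k != i, j of W[i, k] * eps[k]
        s = W @ eps                     # full sum, includes k = i and k = j
        h = s[:, None] - (np.diag(W) * eps)[:, None] - W * eps[None, :]
        # Storkey update: Hebbian term minus the two local-field corrections
        W = W + (np.outer(eps, eps) - eps[:, None] * h.T - h * eps[None, :]) / n
    return W

# usage: store two 5-neuron patterns
pats = np.array([[1, -1, 1, -1, 1],
                 [1, 1, -1, -1, 1]], dtype=float)
W = train_storkey(pats)
```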
In the following example, the weight matrix is built from three different sequences, without gaps.
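A minimal sketch of such a construction, assuming a DNA position-weight-matrix setting; the three gap-free sequences below are invented for illustration.

```python
# Build a position count/frequency matrix from three aligned, gap-free sequences.
import numpy as np

seqs = ["GAGGTAAAC", "TCCGTAAGT", "CAGGTTGGA"]   # hypothetical alignment
alphabet = "ACGT"

counts = np.zeros((len(alphabet), len(seqs[0])))
for seq in seqs:
    for pos, base in enumerate(seq):
        counts[alphabet.index(base), pos] += 1

freqs = counts / len(seqs)                       # column-wise base frequencies
print(freqs)
```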
When the errors are uncorrelated, the weight matrix is diagonal, and it is convenient to simplify the calculations by factoring it as $w_{ii} = \sqrt{W_{ii}}$, so that $W = w^{\mathsf T} w$ and the weighted problem reduces to an ordinary least-squares problem on the rescaled data.
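A minimal sketch of this simplification, assuming per-observation error standard deviations $\sigma_i$ so that $\sqrt{W_{ii}} = 1/\sigma_i$: rows are rescaled and the problem is solved as ordinary least squares. All data here is synthetic.

```python
# Weighted least squares via the diagonal factorization described above.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
beta_true = np.array([2.0, -1.0])
sigma = rng.uniform(0.1, 2.0, size=50)       # per-observation error std
y = X @ beta_true + sigma * rng.normal(size=50)

w = 1.0 / sigma                              # w_ii = sqrt(W_ii) = 1 / sigma_i
beta_hat, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print(beta_hat)                              # should be close to beta_true
```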
The customized Gibbs sampler recovers all of the synthetic weight matrices from dataset 2 almost perfectly.
Reference [6] describes another approach to locating modules from clusters of known weight matrices.
(b) The sum of every row of the weight matrix equals 1.
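A minimal sketch of enforcing property (b), assuming every row sum is nonzero: divide each row of the matrix by its own sum. The matrix here is illustrative.

```python
# Normalize a weight matrix so that every row sums to 1.
import numpy as np

W = np.array([[1.0, 3.0],
              [2.0, 2.0]])
W_norm = W / W.sum(axis=1, keepdims=True)   # divide each row by its sum
assert np.allclose(W_norm.sum(axis=1), 1.0)
```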
There are two types of weight matrices.
A very common strategy in connectionist learning methods is to perform gradient descent over an error surface in the space defined by the weight matrix.
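A minimal sketch of that strategy for a toy linear model: each step moves the weight matrix against the gradient of a squared-error loss. The shapes, data, and learning rate are invented for illustration.

```python
# Gradient descent over an error surface in weight-matrix space.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 3))
Y = rng.normal(size=(64, 2))

W = np.zeros((3, 2))                      # the weight matrix to learn
lr = 0.05                                 # learning rate
for _ in range(500):
    err = X @ W - Y                       # residual at the current point
    grad = X.T @ err / len(X)             # gradient of the squared error
    W -= lr * grad                        # step down the error surface

print(f"final MSE: {np.mean((X @ W - Y) ** 2):.4f}")
```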
(Of course, finding such a weight matrix is more challenging with some problems than with others.)