In my previous post (see MLP: Incremental learning), I demonstrated that incremental learning does not work for an MLP. So how can we make an MLP learn several functions?
The solution is to mix the entries of the training set. If we create 4000 groups, each containing one entry for the AND operator, then one for OR, then one for NOT, and then one for XOR, the MLP will learn all four functions simultaneously.
The tests
The training set was created as explained above. Since neurons compute their output from real-valued parameters, symbolic information like AND, OR, NOT and XOR cannot be processed directly. The AND, OR, NOT and XOR functions were therefore encoded with the following rule: AND = 0, OR = 0.25, NOT = 0.5, XOR = 0.75.
See below for an example of a training-set entry for the OR function:
in: 0.25 1.0 0.0
out: 1.0
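To make the construction concrete, here is a minimal Python sketch of how such an interleaved, encoded training set could be generated. The post does not show its own generation code, so the helper name `build_training_set`, the use of random operand pairs, and the group count defaulting to 4000 are my assumptions.

```python
import random

# Function codes as described above: AND = 0, OR = 0.25, NOT = 0.5, XOR = 0.75.
OPERATORS = [
    (0.00, lambda a, b: float(a and b)),   # AND
    (0.25, lambda a, b: float(a or b)),    # OR
    (0.50, lambda a, b: float(not a)),     # NOT (second operand is ignored)
    (0.75, lambda a, b: float(a != b)),    # XOR
]

def build_training_set(n_groups=4000, seed=42):
    """Build n_groups groups, each holding one entry per operator,
    so the four functions are interleaved throughout the set."""
    rng = random.Random(seed)
    entries = []
    for _ in range(n_groups):
        for code, fn in OPERATORS:
            # Assumption: operand pairs are drawn at random for each entry.
            a, b = rng.randint(0, 1), rng.randint(0, 1)
            # Input vector: [function code, 1st operand, 2nd operand]
            entries.append(([code, float(a), float(b)], [fn(a, b)]))
    return entries
```

With this layout, consecutive entries always cycle through the four operators, which is exactly the mixing described above; an OR entry produced this way has the same shape as the example entry, e.g. `([0.25, 1.0, 0.0], [1.0])`.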
Results of the tests
A picture being worth a thousand words, let's look at the results for several topologies:
And now the result of the learning for an MLP with a 3-4-4-1 topology (3 inputs, two hidden layers of 4 neurons, 1 output):
Logical function | 1st operand | 2nd operand | Output
---|---|---|---
AND | 0 | 0 | -0.00712756 |
AND | 0 | 1 | 0.00900855 |
AND | 1 | 0 | 0.0128678 |
AND | 1 | 1 | 0.993427 |
OR | 0 | 0 | -0.000365019 |
OR | 0 | 1 | 0.9765926 |
OR | 1 | 0 | 0.996321 |
OR | 1 | 1 | 0.995992 |
NOT | 0 | 0 | 0.998554 |
NOT | 0 | 1 | 0.999003 |
NOT | 1 | 0 | -0.00915351 |
NOT | 1 | 1 | 0.0309055 |
XOR | 0 | 0 | 0.0584819 |
XOR | 0 | 1 | 0.974808 |
XOR | 1 | 0 | 0.966551 |
XOR | 1 | 1 | 0.00433713 |
As we can see, the MLP was able to learn all logical operators simultaneously.
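To give an idea of how such a table can be produced, here is a hedged continuation of the earlier sketch. It trains an off-the-shelf network, scikit-learn's MLPRegressor with two hidden layers of four neurons, as a stand-in for the post's own implementation (which is not shown), then evaluates every (function, operand, operand) combination. The exact output values will of course differ from the table above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Reuse build_training_set from the earlier sketch (assumed helper).
data = build_training_set(n_groups=500)
X = np.array([inputs for inputs, _ in data])
y = np.array([target[0] for _, target in data])

# Two hidden layers of 4 neurons with tanh activations: one plausible
# stand-in for the 3-4-4-1 topology mentioned above, not the post's code.
mlp = MLPRegressor(hidden_layer_sizes=(4, 4), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
mlp.fit(X, y)

# Evaluate the 16 (function, operand, operand) combinations, as in the table.
for name, code in [("AND", 0.0), ("OR", 0.25), ("NOT", 0.5), ("XOR", 0.75)]:
    for a in (0.0, 1.0):
        for b in (0.0, 1.0):
            out = mlp.predict([[code, a, b]])[0]
            print(f"{name:4s} | {a:.0f} | {b:.0f} | {out:+.6f}")
```

As in the table above, outputs close to 1 correspond to a logical true and outputs close to 0 to a logical false, with the function selected purely by the first input value.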