Documentation re-generated, a lot of minor stuff

blacklight 2009-08-16 20:57:15 +02:00
parent d52976e74e
commit 7861e56f35
144 changed files with 2589 additions and 851 deletions

View file

@@ -1,10 +1,19 @@
This directory contains some sources that illustrate LibNeural++ usage.
learnAdd trains a neural network to perform simple sums between two
positive integers. The network is then saved to a binary file called
"adder.net". That file can then be used by "doAdd": this program asks
you for the numbers to feed to the network and then prints the network's
output. If the executable files are not present, type `make` to build
them.
* @example examples/learnAdd.cpp Shows how to train a network that performs sums between
* two real numbers. The training XML is built from scratch and saved to a file, the
* network is initialized from that XML file and trained, and the resulting trained
* network is saved to adder.net. Then take a look at doAdd.cpp to see how to load that
* file and use the network.
Of course, you must have Neural++ already installed before typing `make`.
* @example examples/doAdd.cpp Shows how to use a network that has already been trained and
* saved to a binary file: in this case, a network trained to perform sums between two real
* numbers, which should have already been created using learnAdd.
* @example examples/adderFromScratch.cpp Similar to learnAdd.cpp, but this time the
* training XML is generated as a string rather than saved to a file, and is parsed by the
* program itself to build the network. The program then asks for two real numbers and
* computes both their sum and their difference, putting the sum on the first output
* neuron and the difference on the second. Note that using more than one neuron in the
* output layer is strongly discouraged, as the network usually won't set the synaptic
* weights well enough to give satisfying and accurate answers for all of the operations.
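
For a quick overview, the doAdd flow described above reduces to a handful of calls. The sketch below only relies on calls that also appear later in this commit (the NeuralNet file constructor, setInput(), propagate() and getOutput()); the neural++.hpp header name, the std::vector<double> input type and the catch-all exception handler are assumptions rather than part of this commit.

// Minimal sketch of the doAdd flow: load the pre-trained network from
// "adder.net" and query it. Header name and error handling are assumptions.
#include <neural++.hpp>
#include <iostream>
#include <vector>

using namespace std;
using namespace neuralpp;

int main() {
	NeuralNet net;

	try {
		net = NeuralNet("adder.net");   // load the pre-trained network
	} catch (...) {                         // exact exception type not shown in this commit
		cerr << "Could not load adder.net - run learnAdd first\n";
		return 1;
	}

	vector<double> v;
	v.push_back(5.0);       // first addend
	v.push_back(7.0);       // second addend

	net.setInput(v);        // feed the two numbers to the input layer
	net.propagate();        // forward-propagate the values through the network
	cout << "Neural net output: " << net.getOutput() << endl;
	return 0;
}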

View file

@@ -1,6 +1,12 @@
/**
 * This source creates a new neural network able to sum two integer numbers,
 * building the XML containing the training input set on the fly.
 * Similar to learnAdd.cpp, but this time the
 * training XML is generated as a string rather than saved to a file, and is parsed by the
 * program itself to build the network. The program then asks for two real numbers and
 * computes both their sum and their difference, putting the sum on the first output
 * neuron and the difference on the second. Note that using more than one neuron in the
 * output layer is strongly discouraged, as the network usually won't set the synaptic
 * weights well enough to give satisfying and accurate answers for all of the operations.
*
* by BlackLight, 2009
*/
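
The body of adderFromScratch.cpp is not included in this diff, so the following is only a rough sketch of the flow the comment above describes. The training sets reuse the "inputs;outputs" string format shown in learnAdd.cpp; the neural++.hpp header name and the string-based training call are assumptions, so the latter is left commented out.

// Rough sketch of the adderFromScratch flow: two output neurons, one for the
// sum and one for the difference. Only initXML(), XMLFromSet(), closeXML()
// and the six-argument NeuralNet constructor are taken from this commit.
#include <neural++.hpp>
#include <string>

using namespace std;
using namespace neuralpp;

int main() {
	int id = 0;
	string xml;

	// Two outputs per training set: "a,b;sum,difference"
	NeuralNet::initXML(xml);
	xml += NeuralNet::XMLFromSet(id, "2,3;5,-1");
	xml += NeuralNet::XMLFromSet(id, "6,2;8,4");
	NeuralNet::closeXML(xml);

	// 2 input neurons, 2 hidden neurons, 2 output neurons (sum and difference)
	NeuralNet net(2, 2, 2, 0.005, 1000, 0.1);

	// Training directly from the in-memory XML string: the exact source
	// selector is not shown in this diff (only NeuralNet::file is), so the
	// call below is an assumption.
	// net.train(xml, NeuralNet::str);
	return 0;
}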

View file

@@ -1,6 +1,7 @@
/**
 * This source makes sums between two numbers using a pre-trained neural network
 * saved in "adder.net"
 * Shows how to use a network that has already been trained and saved to a
 * binary file: in this case, a network trained to perform sums between two real
 * numbers, which should have already been created using learnAdd.
*
* by BlackLight, 2009
*/
@@ -17,6 +18,7 @@ int main() {
	double a,b;
	NeuralNet net;
	// Load the pre-trained network from the "adder.net" file
	try {
		net = NeuralNet(NETFILE);
	}
@@ -36,6 +38,8 @@ int main() {
	v.push_back(a);
	v.push_back(b);
	// Set the numbers just read as input values, propagate those values, and get
	// the output
	net.setInput(v);
	net.propagate();
	cout << "Neural net output: " << net.getOutput() << endl;

View file

@@ -1,7 +1,9 @@
/**
 * This source creates a new neural network able to sum two integer numbers,
 * using the training input set saved in "adder.xml" and saving the output
 * network to "adder.net"
 * Shows how to train a network that performs sums between
 * two real numbers. The training XML is built from scratch and saved to a file, the
 * network is initialized from that XML file and trained, and the resulting trained
 * network is saved to adder.net. Then take a look at doAdd.cpp to see how to load that
 * file and use the network.
*
* by BlackLight, 2009
*/
@@ -14,17 +16,27 @@
using namespace std;
using namespace neuralpp;
double f (double x) {
	return (x <= 0) ? 1 : 0;
}
int main() {
	int id = 0;
	string xml;
	time_t t1, t2;
	NeuralNet net(2, 2, 1, 0.005, 1000, 0.1, f);
	// Create the neural network. The network is going to have:
	// => 2 neurons for the input layer
	// => 2 neurons for the hidden layer
	// => 1 neuron for the output layer
	// => a learning rate == 0.005 (just tune it with a few tests until satisfied)
	// => 1000 learning steps (i.e. the network will be ready after 1000 training steps spent adjusting the synaptic weights)
	// => 0.1 as the neural threshold (the threshold above which a neuron activates)
	NeuralNet net(2, 2, 1, 0.005, 1000, 0.1);
	// Initialize a training XML as a string in 'xml'
	NeuralNet::initXML(xml);
	// Build some training sets for the XML. The format is:
	// "input1,input2,...,inputn;output1,output2,...,outputn"
	// The 'id' variable is passed by reference, starting from 0,
	// and it's used to enumerate the sets in the XML file.
	xml += NeuralNet::XMLFromSet(id, "2,3;5");
	xml += NeuralNet::XMLFromSet(id, "3,2;5");
	xml += NeuralNet::XMLFromSet(id, "6,2;8");
@@ -35,16 +47,20 @@ int main() {
xml += NeuralNet::XMLFromSet(id, "10,10;20");
NeuralNet::closeXML(xml);
// Save the XML string just created to a file
ofstream out("adder.xml");
out << xml;
out.close();
cout << "Training file adder.xml has been written\n";
// Start the training from the XML file
t1 = time(NULL);
cout << "Training in progress - This may take a while...\n";
net.train("adder.xml", NeuralNet::file);
t2 = time(NULL);
// Save the trained network to a binary file, that can be reloaded from any
// application that is going to use that network
net.save("adder.net");
cout << "Network trained in " << (t2-t1) << " seconds. You can use adder.net file now to load this network\n";
return 0;