A neural network (NN) is trained by an iterative process that adjusts its input weights and biases to minimise the mean square error (MSE). Training uses a backpropagation loop based on the Levenberg-Marquardt algorithm. This algorithm searches locally, so it does not guarantee convergence to the global minimum: it may skip combinations of input weights and biases that would reduce the MSE further. To avoid this issue we have adopted the optimisation method named Gravitational Search Algorithm (GSA), which is explained in the previous chapter. GSA is based on the movement of celestial bodies, and in our case the positions of these agents are the input weights and biases. The output of the NN is calculated by the formula
where IW are the input weights and B are the biases. The number of input weights and biases depends upon the number of hidden layers, and the GSA algorithm is used to tune these values. For this purpose the neural network is first created in MATLAB; that network is then used by the optimisation algorithm. We have used the German Credit Score dataset downloaded from the UCI Machine Learning Repository. This dataset contains 20 attributes along with a good/bad label: label 1 marks a non-fraud case and label 2 a fraud case. Our proposed optimised-neural-network algorithm requires a numeric dataset, and a numeric version of this dataset is available at the same web link.
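How the NN output depends on the input weights IW and biases B can be sketched as follows. This is a minimal Python illustration of a one-hidden-layer feedforward pass; the layer sizes and the tanh/sigmoid activations are assumptions for illustration, not taken from the network described in the text.

```python
import numpy as np

def nn_output(x, IW, b1, LW, b2):
    """Forward pass of a one-hidden-layer network.
    IW, b1: hidden-layer weights and biases; LW, b2: output-layer weights and bias."""
    h = np.tanh(IW @ x + b1)                   # hidden-layer activation
    return 1.0 / (1.0 + np.exp(-(LW @ h + b2)))  # sigmoid output in (0, 1)

# Illustrative sizes: 20 inputs (one per dataset attribute), 5 hidden neurons, 1 output
rng = np.random.default_rng(0)
x  = rng.standard_normal(20)
IW = rng.standard_normal((5, 20)); b1 = rng.standard_normal(5)
LW = rng.standard_normal((1, 5));  b2 = rng.standard_normal(1)
print(nn_output(x, IW, b1, LW, b2))  # a single value in (0, 1)
```

Changing any entry of IW, b1, LW or b2 changes the output, which is why tuning these values can reduce the MSE.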
Neural Network optimisation by GSA
The proposed work is to tune the NN to obtain high accuracy and a low mean square error. To achieve this aim we use GSA optimisation to tune the NN's weights and biases. In every optimisation task an objective function must be defined which calculates the target value, the MSE in our case. This objective function is called in each iteration, once for each agent in that iteration. Since the neural network is already created and trained in the previous step, it need not be recreated every time the objective function is called: the objective function updates the pre-trained NN's weights and biases, 251 in number, and calculates the MSE for that set of weights and biases. The developed objective function snippet is shown below.
function [performance, net] = Objective_function(L, ~, net, input, target)
% input  - input data.
% target - target data.
x = input';
t = target';
t(t==2) = 0;          % relabel class 2 (bad/fraud) as 0
net = setwb(net, L);  % set the input weights and biases of the NN using the values in 'L'
% Test the network
y = net(x);
e = gsubtract(t, y);
performance = perform(net, t, y);
GSA is based on its agents' movements, and an agent's position is represented by the weight and bias values. The number of coordinates of an agent's position equals the total number of input weights and biases, which in our case is 251. These weights and biases are the positions of all agents used in the optimisation and are updated as per the equations quoted in chapter 3. They can be fetched from the generated neural network by the MATLAB function 'getwb' and, after updating, are set back into the NN by 'setwb'. The significance of the GSA terminology for NN tuning is provided in table 4.3.
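The getwb/setwb round trip can be mimicked outside MATLAB. The Python sketch below flattens a network's weight matrices and bias vectors into a single position vector and restores them, which is exactly how each GSA agent's position maps onto the network; the 20-10-1 architecture here is an assumed example, not the 251-parameter network of the text.

```python
import numpy as np

def get_wb(layers):
    """Flatten all weight matrices and bias vectors into one vector (like MATLAB's getwb)."""
    return np.concatenate([p.ravel() for W, b in layers for p in (W, b)])

def set_wb(layers, vec):
    """Write a flat vector back into the weight/bias arrays (like MATLAB's setwb)."""
    out, i = [], 0
    for W, b in layers:
        W2 = vec[i:i + W.size].reshape(W.shape); i += W.size
        b2 = vec[i:i + b.size].reshape(b.shape); i += b.size
        out.append((W2, b2))
    return out

# Assumed architecture: 20 inputs -> 10 hidden -> 1 output
rng = np.random.default_rng(1)
layers = [(rng.standard_normal((10, 20)), rng.standard_normal(10)),
          (rng.standard_normal((1, 10)),  rng.standard_normal(1))]
vec = get_wb(layers)
print(vec.size)  # 221 coordinates: this is the dimension of one agent's position
```

The length of this flat vector is the optimisation dimension handed to GSA; every agent moves in a space with that many coordinates.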
Table 4.3: Significance of GSA terminology in NN tuning

GSA terminology                                          Significance in NN tuning
Position of an agent                                     Input weights and biases
Dimension for optimisation / number of variables to tune Total number of input weights and biases
Update in the position of agents                         Change the values of weights and biases to move towards minimum MSE
A complete step-by-step algorithm is explained below.
Load the German credit card fraud dataset in numeric format and split it randomly in a 70/30 ratio for training and testing of the neural network.
Generate the NN script to create and train the network whose weights and biases are to be optimised.
Initialise the GSA parameters: the number of iterations, the number of agents, the initial G0 and alpha. Pass the previously created network into GSA to obtain the dimension of the weights and biases.
Randomly initialise new input weights and biases to give an initial seed to the GSA optimisation. These values must lie within the boundary given in the next chapter.
Call the objective function to update the neural network's weights and biases and calculate the MSE for those values using the testing dataset.
To update the positions of the agents, the force and mass have to be calculated using the equations of chapter 3.
The new updated position is obtained from the position-update formula; the velocity in this case is calculated from the acceleration, which is based on the force and mass computed in the previous step.
For this new updated position, i.e. the new values of the weights and biases, the objective function is called again and the MSE is saved.
Of the previous two sets of values, the weights and biases giving the minimum MSE are carried forward for further updating.
This process continues until all iterations are completed.
The final minimum MSE is obtained, and the corresponding set of weights and biases is used as the final NN weights and biases; this gives a lower MSE than the conventional NN and the Simulated Annealing tuned NN reported previously by Khan A. et al.
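The loop described in the steps above can be sketched end to end. The following sketch is in Python for brevity (the actual work is in MATLAB) and is illustrative only: a quadratic objective stands in for the 251-dimensional NN MSE, and the agent count, G0, alpha and bounds are assumed values, not the settings used in this work.

```python
import numpy as np

def gsa_minimise(objective, dim, n_agents=20, n_iter=100,
                 g0=100.0, alpha=20.0, lb=-1.0, ub=1.0, seed=0):
    """Minimal GSA loop: each agent's position is a candidate weight/bias vector."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_agents, dim))   # random initial seed within bounds
    V = np.zeros_like(X)
    best_x, best_f = X[0].copy(), np.inf
    for t in range(n_iter):
        fit = np.array([objective(x) for x in X])   # objective (MSE) for each agent
        if fit.min() < best_f:                      # keep the best set found so far
            best_f, best_x = float(fit.min()), X[np.argmin(fit)].copy()
        G = g0 * np.exp(-alpha * t / n_iter)        # gravitational constant decays
        best, worst = fit.min(), fit.max()
        denom = best - worst
        m = np.ones(n_agents) if denom == 0 else (fit - worst) / denom
        M = m / m.sum()                             # normalised masses from fitness
        A = np.zeros_like(X)                        # acceleration = force / own mass
        for i in range(n_agents):
            for j in range(n_agents):
                if i != j:
                    R = np.linalg.norm(X[i] - X[j])
                    A[i] += rng.random() * G * M[j] * (X[j] - X[i]) / (R + 1e-12)
        V = rng.random(X.shape) * V + A             # velocity from acceleration
        X = np.clip(X + V, lb, ub)                  # position update within bounds
    return best_x, best_f

# Toy usage: a quadratic surrogate stands in for the NN's MSE objective
best_x, best_f = gsa_minimise(lambda w: float(np.mean(w ** 2)), dim=5)
print(best_f)  # small nonnegative value
```

In the actual method the lambda is replaced by the MATLAB objective function above, so each agent's position vector is written into the network with setwb before the MSE is computed.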
By following this methodology, the accuracy improvement over RBFNN and the other algorithms is shown in the table below.
Table: % improvement of GSA tuned NN over other algorithms

GSA vs SA (%)    GSA vs NN (%)