Optimization is the process of improving the DBNN method. In our implementation, the aim is to optimize the DBNN parameters in order to improve the performance and the quality of the DL network structures.
In order to optimize the DBNN parameters, we apply an evolutionary algorithm known as the GA. A real-valued parallel GA is employed in our optimization process, which outperforms the single-population GA in terms of solution quality. Using the parallel GA, we optimize the number of hidden units, the number of epochs, and the learning rates to reduce the error rate and the training time. Our main contributions to the parallel GA are the design of the fitness function and the design of the parameters of the GA structure.
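The structure of such a real-valued parallel GA can be sketched as follows. This is an illustrative implementation, not the authors' code: the parameter names, search ranges, elitism scheme, and ring-topology migration are assumptions, while the subpopulation count, mutation rates, crossover probability, isolation time, migration rate, and generation count follow the values reported in Table 1.

```python
import random

# Hypothetical search ranges for the three DBNN parameters being tuned;
# the bounds are illustrative, not taken from the paper.
PARAM_BOUNDS = {
    "hidden_units": (10, 500),
    "epochs": (5, 200),
    "learning_rate": (1e-4, 1e-1),
}

def random_individual(rng):
    # A real-valued chromosome: one gene per DBNN parameter.
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_BOUNDS.items()}

def mutate(ind, rate, rng):
    # Resample each gene uniformly with probability `rate`.
    child = dict(ind)
    for k, (lo, hi) in PARAM_BOUNDS.items():
        if rng.random() < rate:
            child[k] = rng.uniform(lo, hi)
    return child

def crossover(a, b, rng):
    # Uniform crossover on real-valued genes.
    return {k: a[k] if rng.random() < 0.5 else b[k] for k in a}

def evolve(fitness, n_subpops=4, pop_size=25, generations=30,
           mutation_rates=(0.100, 0.030, 0.010, 0.003),
           crossover_prob=0.8, isolation=10, migration_frac=0.10, seed=0):
    """Minimize `fitness` with a multi-subpopulation (island) GA."""
    rng = random.Random(seed)
    subpops = [[random_individual(rng) for _ in range(pop_size)]
               for _ in range(n_subpops)]
    for gen in range(generations):
        for i, pop in enumerate(subpops):
            pop.sort(key=fitness)          # lower fitness is better
            elite = pop[:pop_size // 2]    # keep the better half (assumed scheme)
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = rng.sample(elite, 2)
                child = (crossover(a, b, rng)
                         if rng.random() < crossover_prob else dict(a))
                children.append(mutate(child, mutation_rates[i], rng))
            subpops[i] = elite + children
        # After each isolation period, migrate the best 10% around a ring.
        if (gen + 1) % isolation == 0:
            n_mig = max(1, int(pop_size * migration_frac))
            migrants = [sorted(p, key=fitness)[:n_mig] for p in subpops]
            for i in range(n_subpops):
                dst = subpops[(i + 1) % n_subpops]
                dst.sort(key=fitness)
                dst[-n_mig:] = [dict(m) for m in migrants[i]]  # replace worst
    return min((ind for pop in subpops for ind in pop), key=fitness)
```

In this sketch each subpopulation evolves in isolation for 10 generations with its own mutation rate, after which the best 10% of each island replace the worst individuals of its neighbor, so strong parameter settings spread without collapsing diversity.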
The main goal is to find the optimal number of hidden units, number of epochs, and learning rates. Therefore, the fitness is evaluated so as to minimize the error rate and the network training time. The fitness function is defined in terms of the following quantities: e_BBP, the number of misclassifications divided by the total number of test data before back propagation; e_ABP, the number of misclassifications divided by the total number of test data after back propagation; t_BBP, the training time before back propagation; and t_DBP, the training time during the back propagation operation.
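Since the fitness combines classification error and training time, one plausible form is a weighted sum of the four quantities above. The weights `w_err` and `w_time` below are illustrative assumptions; the paper's exact combination is not reproduced here.

```python
def fitness(e_bbp, e_abp, t_bbp, t_dbp, w_err=1.0, w_time=1.0):
    """Hypothetical fitness: lower is better.

    e_bbp / e_abp: error rates before / after back propagation.
    t_bbp / t_dbp: training time before / during back propagation.
    The weights are illustrative, not values from the paper.
    """
    return w_err * (e_bbp + e_abp) + w_time * (t_bbp + t_dbp)
```

Raising `w_time` relative to `w_err` would bias the GA toward faster-training networks at the cost of some accuracy, and vice versa.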
The GA functions and parameters are presented in Table 1. A variety of mutation rates were tried, and the rates listed in the table were found to perform best.
| Function name | Parameters |
|---|---|
| Number of subpopulations | 4 |
| Initial number of individuals (subpopulation) | 25, 25, 25, 25 |
| Crossover probability | 0.8 |
| Mutation rate (subpopulation) | 0.100, 0.030, 0.010, 0.003 |
| Isolation time | 10 generations |
| Migration rate | 10% |
| Results on screen | Every 1 generation |
| Competition rate | 10% |
| Termination | 30 generations |
Table 1. GA functions and parameters
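For reference, the Table 1 settings can be collected into a single configuration mapping. The key names are our own; the values are transcribed directly from the table.

```python
# GA settings transcribed from Table 1 (key names are ours).
GA_CONFIG = {
    "num_subpopulations": 4,
    "individuals_per_subpopulation": [25, 25, 25, 25],
    "crossover_probability": 0.8,
    "mutation_rate_per_subpopulation": [0.100, 0.030, 0.010, 0.003],
    "isolation_time_generations": 10,
    "migration_rate": 0.10,
    "results_on_screen_every_generations": 1,
    "competition_rate": 0.10,
    "termination_generations": 30,
}
```

Note that each subpopulation pairs a distinct mutation rate with the same population size, giving a total of 100 individuals per generation.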
Optimizing Deep Learning Parameters Using Genetic Algorithm for Object Recognition and Robot Grasping
- Received Date: 2016-11-03
- Rev Recd Date: 2017-12-01
- Publish Date: 2018-03-01
- Deep learning (DL)
- Deep belief neural network (DBNN)
- Genetic algorithm (GA)
- Object recognition
- Robot grasping
Abstract: The performance of deep learning (DL) networks has been increased by elaborating the network structures. However, DL networks have many parameters that strongly influence network performance. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. This method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, to reduce the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We build a database of six objects for experimental purposes. Experimental results demonstrate that our method performs well on the optimized robot object recognition and grasping tasks.
Citation: Delowar Hossain, Genci Capi, Mitsuru Jindai. Optimizing Deep Learning Parameters Using Genetic Algorithm for Object Recognition and Robot Grasping[J]. Journal of Electronic Science and Technology, 2018, 16(1): 11-15. doi: 10.11989/JEST.1674-862X.61103113