
Multiple neural network models were trained and compared. Two datasets were used for testing: the raw dataset of 226 images and the augmented dataset of 2260 images. Two neural network structures were used: a simple NN with one hidden layer, and a CNN formed by adding a convolutional layer to the simple NN.
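To make the two structures concrete, here is a minimal Keras-style sketch in Python. Only the overall shape follows the description above (one hidden layer, plus one convolutional layer for the CNN, and two output classes); the input size, hidden-layer width, and convolution parameters are illustrative assumptions, not values from the thesis.

```python
# Illustrative sketch: layer sizes and the 64x64 single-channel input
# are assumptions; only the overall structure follows the text above.
from tensorflow.keras import layers, models

def simple_nn(input_shape=(64, 64, 1), hidden=128, classes=2):
    # Simple NN: flattened silhouette -> one hidden layer -> softmax.
    return models.Sequential([
        layers.Flatten(input_shape=input_shape),
        layers.Dense(hidden, activation="relu"),
        layers.Dense(classes, activation="softmax"),
    ])

def cnn(input_shape=(64, 64, 1), hidden=128, classes=2):
    # CNN: the same simple NN with a convolutional layer added in front.
    return models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(hidden, activation="relu"),
        layers.Dense(classes, activation="softmax"),
    ])
```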

Different numbers of training epochs were also tried. Based on the options above, seven combinations were generated and seven models were trained, as listed below (a sketch of a driver loop for these runs follows the list):

• simple NN with raw data and 10 epochs of training;

• simple NN with raw data and 25 epochs of training;

• simple NN with raw data and 100 epochs of training;

• simple NN with augmented data and 10 epochs of training;

• simple NN with augmented data and 25 epochs of training;

• CNN with augmented data and 25 epochs of training;

• CNN with augmented data and 100 epochs of training.
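A minimal driver loop for these seven runs could look as follows, reusing the sketched `simple_nn` and `cnn` builders from above; `load_dataset` is a hypothetical helper, not part of the thesis code.

```python
# Hypothetical driver for the seven training runs listed above.
CONFIGS = [
    (simple_nn, "raw", 10), (simple_nn, "raw", 25), (simple_nn, "raw", 100),
    (simple_nn, "augmented", 10), (simple_nn, "augmented", 25),
    (cnn, "augmented", 25), (cnn, "augmented", 100),
]

for build_model, dataset, epochs in CONFIGS:
    (x_train, y_train), (x_val, y_val) = load_dataset(dataset)  # assumed helper
    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Validation accuracy is recorded after every epoch (cf. Figures 16-18).
    model.fit(x_train, y_train, epochs=epochs, validation_data=(x_val, y_val))
```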

[Figure 16 plot: validation accuracy (%) on the y-axis (70-85) against training epoch (1-10); series: simple NN with raw data, simple NN with augmented data.]

Figure 16. Accuracy of models based on validation after each epoch of training over 10 epochs.

Figure 16 highlights the difference between raw and augmented data. Two simple NN models were each trained for 10 epochs, with accuracy recorded upon validation after each epoch; the only difference between the models was the dataset used. With raw data the validation accuracy was mostly between 81% and 83%, while with augmented data it was between 85% and 86%. Using the ten times larger augmented dataset instead of the smaller raw dataset thus increased model accuracy by around 3 percentage points.
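For scale, the tenfold growth (226 to 2260 images) means nine extra variants per original image. The exact transformations used are not restated in this section; purely as an illustration, a flip-and-rotate scheme like the following would produce the same tenfold growth.

```python
# Illustration only: the actual augmentation transforms are not listed
# in this section; mirroring and small rotations are assumptions.
import cv2
import numpy as np

def augment_tenfold(silhouette: np.ndarray) -> list:
    """Return the original plus nine variants (10 images in total)."""
    mirrored = cv2.flip(silhouette, 1)               # horizontal mirror
    variants = [silhouette, mirrored]
    h, w = silhouette.shape[:2]
    for angle in (-8, -4, 4, 8):                     # small rotations
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        variants.append(cv2.warpAffine(silhouette, m, (w, h)))
        variants.append(cv2.warpAffine(mirrored, m, (w, h)))
    return variants                                  # 2 + 4 * 2 = 10
```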

[Figure 17 plot: validation accuracy (%) on the y-axis (70-100) against training epoch (0-25); series: simple NN with raw data, simple NN with augmented data, CNN with augmented data.]

Figure 17. Accuracy of models based on validation after each epoch of training over 25 epochs.

Figure 17 compares the three most different models over 25 epochs of training. The simple NN with augmented data performs better than the simple NN with raw data, but the validation accuracy of these two models never differs by more than 5 percentage points.

Introducing a convolutional layer to the simple NN, forming a CNN, makes the model outperform the other two by far when trained with augmented data. The CNN reaches a validation accuracy of 93-97%. The other two models do not manage to get over 90% accuracy even after 25 epochs of training, and the difference in accuracy between the simple NN and the CNN is at some points as high as 10 percentage points.

[Figure 18 plot: validation accuracy (%) on the y-axis (70-90) against training epoch (0-100); series: simple NN with raw data, CNN with augmented data.]

Figure 18. Accuracy of models based on validation after each epoch of training over 100 epochs.

Based on the previous results, the worst model is the simple NN with raw data and the best model is the CNN with augmented data. Next, both models were trained for 100 epochs to see what validation accuracy could be achieved with several times more epochs than the earlier 10- and 25-epoch runs. The simple NN with raw data almost reached 90% validation accuracy after around 80 epochs of training; after the first epoch, however, its validation accuracy was only 60%.

The CNN with augmented data outperformed the simple NN by a large margin at every epoch of training. It reached an accuracy of 85-95% within the first 10 epochs and appeared to converge after only 40 epochs; over the last 60 epochs of training, the CNN did not improve the results drastically.

[Figure 19 bar chart, "Different neural network models in comparison": error rate (%) on test data (axis 5-40) for the seven models listed above; the bar for the simple NN with raw data and 10 epochs is labelled 38.63.]

Figure 19. Comparison of the different neural network models based on the error rate achieved on test data after fully training each model.

After fully training all the models, they were tested on the test dataset and the error rates were recorded.

The worst model, the simple NN with raw data and only 10 epochs of training, achieved an error rate of 38%. Changing the neural network model step by step, to finally use a convolutional layer and augmented data with 100 epochs of training, brought the error rate down to 8%. The success criterion was defined as the neural network reaching an error rate of under 10% on test images, where the error rate is the sum of false positives and false negatives divided by the total number of samples. As seen from Figure 19, the error rate of the best model is 8.05%, lower than the threshold set in the success criterion, so we consider this result a success. It has also become clear that to reach an error rate even lower than 8% on a problem as complex as classifying human body poses, thousands upon thousands of images are required and most likely key-point detection must be incorporated into the model.
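Written out, the error rate defined above is

\[
\text{error rate} = \frac{FP + FN}{N},
\]

where \(FP\) and \(FN\) are the numbers of false positives and false negatives on the test set and \(N\) is the total number of test samples; an error rate of 8.05% thus means roughly 8 samples out of every 100 were misclassified.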

4.5 Conclusion

This chapter illustrates the details of the algorithms and methods used during experimentation.

The chapter also covers the reasoning behind the parameters used for each algorithm and method, and explains in detail the different neural network models tested and the results achieved. Through extensive parameter optimization in all steps of the methodology, we achieved an 8.05% error rate in detecting human body poses from a single image obtained with a low-cost camera.

5 Conclusion

5.1 Conclusion

Computer vision came under more active research in the late 1990s and is still an actively researched field with many challenges. In the 2000s, researchers started to actively combine machine learning with computer vision to tackle challenges like object recognition. An active challenge in this category is human body pose recognition. A solution for human body pose recognition that is both fast and very accurate would solve many real-life problems, such as detecting distracted drivers.

In this thesis, we presented techniques capable of detecting and recognizing human body poses using a neural network with class-based data augmentation. The approach includes three major steps: detection of the human, extraction of the human silhouette, and classification and recognition of the human body pose. The proposed framework uses a pre-trained SVM for human detection, the GrabCut algorithm for human silhouette extraction, and, for human body pose classification, a CNN trained with an augmented dataset of silhouettes.
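A minimal Python sketch of this three-step pipeline is given below, assuming OpenCV's default HOG-trained people detector stands in for the pre-trained SVM and that a trained CNN model (`cnn_model`) is available; the concrete detector, the GrabCut settings, and the class-label order are assumptions, not the thesis implementation.

```python
# Sketch of the detect -> extract -> classify pipeline; the detector,
# GrabCut settings and label order are stand-ins, not the thesis code.
import cv2
import numpy as np

def classify_pose(image, cnn_model):
    # Step 1: human detection with a pre-trained SVM (HOG features).
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _ = hog.detectMultiScale(image)
    if len(boxes) == 0:
        return None
    x, y, w, h = boxes[0]

    # Step 2: silhouette extraction with GrabCut, seeded by the box.
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, (x, y, w, h), bgd, fgd, 5,
                cv2.GC_INIT_WITH_RECT)
    silhouette = np.where((mask == 1) | (mask == 3), 255, 0).astype("uint8")

    # Step 3: CNN classification of the resized silhouette crop.
    patch = cv2.resize(silhouette[y:y + h, x:x + w], (64, 64)) / 255.0
    probs = cnn_model.predict(patch[None, :, :, None])
    return ("standing", "walking")[int(np.argmax(probs))]  # assumed order
```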

The results obtained by our method are very encouraging, since we produced accuracy over 90% with the best neural network model. The best model was created by using a CNN trained with an augmented dataset of 2260 silhouettes produced by the detection and extraction steps. The CNN was trained to detect whether a human was walking or standing still based on the silhouette, so the dataset had over 1000 silhouette examples per class. The trained CNN, using forward and backward propagation, achieved an error rate of 8%.

Recommendations for improving the performance of the classifier are described in the next section. The recommendations still need to be verified and are therefore introduced under future work.

5.2 Future work

As future work, we should increase the number of images and classes in our dataset, which will have an impact on recognition. In addition, through this investigation we came to the conclusion that deep learning techniques rely heavily on the input parameters. By tuning the CNN input parameters and layers, the accuracy might be improved further. The future aim should be to bring human body pose recognition accuracy over 99%.


Appendix I. License

Non-exclusive licence to reproduce thesis and make thesis public

I, Karl-Kristjan Luberg,

1. herewith grant the University of Tartu a free permit (non-exclusive licence) to:

1.1 reproduce, for the purpose of preservation and making available to the public, including for addition to the DSpace digital archives until expiry of the term of validity of the copyright, and

1.2 make available to the public via the web environment of the University of Tartu, including via the DSpace digital archives until expiry of the term of validity of the copyright,

of my thesis

Human Body Poses Recognition Using Neural Networks with Class Based Data Augmentation

supervised by Dr Amnir Hadachi and Mr Artjom Lind,

2. I am aware of the fact that the author retains these rights.

3. I certify that granting the non-exclusive licence does not infringe the intellectual property rights or rights arising from the Personal Data Protection Act.

Tartu, 21.05.2018