Deep Neural Networks
SIMILAR DOCUMENTS
HexConv likewise employs hexagonal filters, that is, the weights are no longer stored in rectangular form as a two-dimensional tensor, but in a …
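One common way to realize a hexagonal filter support (a sketch under assumptions; this is not necessarily how HexConv itself stores its weights) is to keep a square array in axial coordinates and mask out the corner cells that fall outside the hexagon:

```python
# Sketch: embedding a hexagonal neighborhood in a square array using
# axial coordinates. A radius-1 hexagon has 7 cells; in the enclosing
# 3x3 axial grid, two opposite corners lie outside the hexagon and are
# masked out. The function name `hex_mask` is illustrative.

def hex_mask(radius):
    """Boolean mask for a hexagonal neighborhood in axial coordinates."""
    size = 2 * radius + 1
    mask = []
    for q in range(-radius, radius + 1):
        row = []
        for r in range(-radius, radius + 1):
            # Hex distance from the center in axial coordinates
            # (equivalent to max(|x|, |y|, |z|) in cube coordinates).
            dist = max(abs(q), abs(r), abs(q + r))
            row.append(dist <= radius)
        mask.append(row)
    return mask

for row in hex_mask(1):
    print("".join("#" if m else "." for m in row))
```

Multiplying such a mask into a square weight tensor zeroes the two corner weights, so an ordinary rectangular convolution effectively operates on a hexagonal neighborhood.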
Keywords: deep neural networks, activation maximization, sensitivity analysis, Taylor decomposition, layer-wise relevance
Here, we present the new tool Interactive Feature Localization in Deep Neural Networks (IFeaLiD), which provides a novel visualization approach to convolutional neural network
Recently, deep neural networks have transformed the fields of handwriting recognition, speech recognition [4], large-scale image recognition [5], and video analysis [6,7], and are
Houston data set, the recurrent network with the proposed PRetanh can quickly converge to an error of 0.401 after 100 iterations. Under the same conditions, tanh can only yield
We propose a simple RGB-based method for recognizing both rigid and deformable objects and synthesize images for training a neural network. We then test this method by training
The aim of this thesis is to accurately and efficiently classify household appliances in small time intervals (windows) from the power consumption data of household appliances using
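The windowing step described here can be sketched as follows (a minimal illustration with made-up readings; the function name and parameters are assumptions, not taken from the thesis):

```python
# Sketch: slicing a power-consumption series into fixed-size,
# possibly overlapping windows. Each window would then be handed
# to the classifier as one sample.

def sliding_windows(series, window, step):
    """Split a time series into windows of length `window`, advancing by `step`."""
    return [series[i:i + window]
            for i in range(0, len(series) - window + 1, step)]

# Hypothetical power readings (e.g. in kW) for one appliance.
readings = [0.1, 0.1, 1.8, 2.0, 1.9, 0.2, 0.1, 0.1]
windows = sliding_windows(readings, window=4, step=2)
print(windows)
```

The choice of `window` and `step` trades off temporal resolution against the number of training samples per recording.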
Figure 22 (last iteration) shows that the convolutional network with the highest recognition rate is not necessarily the fittest. In general, it can even …