{"id":3176,"date":"2020-01-30T10:20:39","date_gmt":"2020-01-30T10:20:39","guid":{"rendered":"https:\/\/www.gyanvihar.org\/journals\/?p=3176"},"modified":"2020-01-30T10:28:35","modified_gmt":"2020-01-30T10:28:35","slug":"comparison-for-image-prediction-with-different-sample-count-and-different-number-of-epoch","status":"publish","type":"post","link":"https:\/\/www.gyanvihar.org\/journals\/comparison-for-image-prediction-with-different-sample-count-and-different-number-of-epoch\/","title":{"rendered":"Comparison for Image Prediction with Different Sample Count and Different Number of Epoch"},"content":{"rendered":"<p align=\"CENTER\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: large\"><b>Comparison for Image Prediction with Different Sample Count and Different Number of Epoch<\/b><\/span><\/span><\/p>\n<p align=\"CENTER\"><span style=\"font-family: Times New Roman, serif\">Kiran Choudhary<\/span><sup><span style=\"font-family: Times New Roman, serif\">1<\/span><\/sup><span style=\"font-family: Times New Roman, serif\"> and Sohit Agarwal<\/span><sup><span style=\"font-family: Times New Roman, serif\">2<\/span><\/sup><\/p>\n<p align=\"CENTER\"><sup><span style=\"font-family: Times New Roman, serif\">1,2<\/span><\/sup><span style=\"font-family: Times New Roman, serif\">Suresh Gyan Vihar University, Jaipur<\/span><\/p>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\"><b>Abstract: <\/b><\/span><\/span><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">Image classification is a complex process that may be affected by many factors. There are supervised and unsupervised classification techniques. The emphasis is placed on the deep neural network classification approach and how this technique is used for improving classification accuracy. 
Convolutional neural networks (CNNs) are used to implement the classification process. A CNN applies one or more convolution layers with nonlinear activation functions, and its intermediate layers learn local, spatially shared features. In all cases the input images are resized to 64&#215;64 pixels. Here we have detected samples of cats and dogs with training sets of 2000, 100 and 500 samples. Similarly, for 2000 training samples we use two different optimizers, ADAM and ADAGRAD, and evaluate the performance of both. <\/span><\/span><\/p>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\"><b>Keywords: Convolutional neural networks, dataset, training, testing, validation error<\/b><\/span><\/span><\/p>\n<ol type=\"I\">\n<li><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\"><b>Introduction<\/b><\/span><\/span><\/li>\n<\/ol>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">Image classification is a complex process that may be affected by many factors. There are supervised and unsupervised classification techniques. The emphasis is placed on the deep neural network classification [2] approach and how this technique is used for improving classification accuracy.<\/span><\/span><\/p>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">Cabral et al. (2011) [1] described that classification of remote sensing data is used to assign labels to groups with homogeneous characteristics. 
The main aim of classification is to discriminate multiple objects from each other within the image. It can be said that classification divides the feature space into several classes based on a decision rule. Figure 1 shows the concept of classification of data. The learning algorithms are broadly classified into supervised and unsupervised learning techniques. The distinction is drawn from how the learner classifies data. In supervised learning, the classes are predetermined. These classes can be conceived of as a finite set, previously arrived at by a human. In practice, a certain class of data will be labeled with these classifications. M. M. Cisse (2013) [3] reviewed that the classes are then evaluated based on their predictive capacity in relation to measures of variance in the data itself. Some examples of supervised classification techniques are Back Propagation Network (BPN) [4], Learning Vector Quantization (LVQ) [5], Self Organizing Map (SOM) [6,7], Support Vector Machine (SVM) [8,9], etc. Here we will use the convolutional neural network technique for classification.<\/span><\/span><\/p>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">Machine learning can be described in terms of a task T, an experience E, and a performance measure P: a computer program is said to learn from experience E with respect to task T and performance measure P if its performance at T, as measured by P, improves with experience E. For image classification, the experience E is the collection of training images, the task T is assigning each image to its correct class, and the performance measure P is the classification accuracy (or, equivalently, the error).<\/span><\/span><\/p>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">As the program processes more training images its experience increases, its predictions become more accurate and, consequently, its error continues to decrease. In this regard human learning is quite similar: as human beings, when we say we learn something, the whole point of learning is to get better results as we gain more knowledge and experience with a certain class of tasks. Therefore, this is the source of the general formulation of learning.<\/span><\/span><\/p>\n<ol type=\"I\">\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: medium\"><b>Methodology and Work Objectives<\/b><\/span><\/span><\/p>\n<\/li>\n<\/ol>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">In our real-life applications we sometimes have images which are blurred or noisy and are not recognizable. By choosing the appropriate deep learning tool we can predict the lost information of the image and classify which category the image belongs to. CNNs exploit the following key ideas to solve the problem of deep learning: 
(i) sparse interactions, (ii) parameter sharing, and (iii) equivariant representations. Moreover, convolution provides a means for working with inputs of variable size. We have seen the working procedure of the convolutional neural network. Now we apply the CNN to the detection of cats vs. dogs. Here we have a data set of 2000 cats and dogs (1000 images of cats and 1000 images of dogs). This data set was downloaded from \u201cKaggle\u201d. We collected a validation set of 200 samples; the test set is also of size 200, with 100 images of cats and 100 images of dogs. The test criteria for image detection are average loss and accuracy. Here we also calculate the validation loss and validation accuracy. We have detected the samples of cats and dogs with 2000 samples, 100 samples and 500 samples. 
Similarly, for 2000 training samples we use two different optimizers, ADAM and ADAGRAD, and evaluate the performance of both.\u00a0<\/span><\/span><\/p>\n<ol start=\"2\" type=\"I\">\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: medium\"><b>Convolutional neural network framework for image detection<\/b><\/span><\/span><\/p>\n<\/li>\n<\/ol>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">We use convolutional neural networks (CNNs) to perform our task of image detection using deep learning. We create a deep learning CNN model for an old Kaggle competition called\u00a0<a href=\"https:\/\/www.kaggle.com\/c\/dogs-vs-cats\">Dogs vs Cats<\/a>. More than 25,000 images of cats and dogs are available for training, and there are 12,500 in the test set that we try to label for this work. Of these, we use a data set of 2000 samples for training, choose 200 images (100 of each) for testing, and finally check how our network performs. We design a 2D convolutional neural network with an input shape of 64&#215;64&#215;3 and the ReLU activation function. We choose a pooling size of 2&#215;2 with max pooling. We add a second convolution layer of size 32&#215;3&#215;3 (32 filters of size 3&#215;3). In the second layer the activation function is again ReLU and the pooling size remains 2&#215;2. Flattening converts the resulting feature maps into a one-dimensional vector. The fully connected layer has an output dimension of 128 with ReLU activation. Finally we take a single output whose activation function is &#8216;sigmoid&#8217;. For the compilation of the network, \u2018adam\u2019 is the optimization algorithm and the loss function is &#8216;binary cross entropy&#8217;. 
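As a quick sanity check on the layer sizes described here, the short sketch below traces the feature-map dimensions through the network. It assumes 'valid' (no-padding) convolutions with stride 1 and non-overlapping 2x2 pooling, which are common defaults, so the exact numbers are illustrative rather than taken from the paper.

```python
# Trace the spatial dimensions through the described CNN:
# 64x64x3 input -> Conv 3x3 -> MaxPool 2x2 -> Conv 3x3 -> MaxPool 2x2 -> Flatten.
# Assumes 'valid' (no padding) convolutions and non-overlapping 2x2 pooling.

def conv_out(size, kernel):
    """Output width/height of a 'valid' convolution with stride 1."""
    return size - kernel + 1

def pool_out(size, pool):
    """Output width/height of non-overlapping pooling (stride = pool size)."""
    return size // pool

size = 64
size = conv_out(size, 3)   # first 3x3 convolution  -> 62
size = pool_out(size, 2)   # 2x2 max pooling        -> 31
size = conv_out(size, 3)   # second 3x3 convolution -> 29
size = pool_out(size, 2)   # 2x2 max pooling        -> 14

filters = 32               # filters in the last convolution layer
flattened = size * size * filters
print(size, flattened)     # 14 6272
```

Under these assumptions the Flatten step would hand a 6272-element vector to the 128-unit fully connected layer.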
We choose a target size of (64, 64), a batch size of 32 and class mode &#8216;binary&#8217;. The target size parameters are kept the same for the training and test sets. <\/span><\/span><\/p>\n<ol start=\"3\" type=\"I\">\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\"><b>Python libraries and setup for deep learning program<\/b><\/span><\/span><\/p>\n<\/li>\n<\/ol>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">Here we use the following Python libraries.<\/span><\/span><\/p>\n<ul>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">KERAS<\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">TENSORFLOW<\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">THEANO<\/span><\/span><\/p>\n<\/li>\n<\/ul>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">KERAS modules used in the program<\/span><\/span><\/p>\n<ul>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">Sequential<\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">Convolution2D<\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">MaxPooling2D<\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">Flatten<\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: 
Dense">
small\">Dense<\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">PlotLossesKeras<\/span><\/span><\/p>\n<\/li>\n<\/ul>\n<ol start=\"4\" type=\"I\">\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\"><b>Output Results and Discussion<\/b><\/span><\/span><\/p>\n<\/li>\n<\/ol>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">To run the program we choose 2000 samples per epoch, and there are a total of 15 epochs in our program. For the above parameters the executed results are as follows. <\/span><\/span><\/p>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">After completing the execution of the program, the conclusive results are shown in Table 5.1.<\/span><\/span><\/p>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">From the above table our conclusions are as follows:<\/span><\/span><\/p>\n<ul>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">We choose a sample size of 2000 images.<\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">We choose 200 images for testing and also for validation.<\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">We choose 15 epochs with 6000 iterations in each epoch.<\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">We obtained a validation accuracy of 72%.<\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">We 
obtained a test accuracy of 99.88%.<\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">As we reduced the sample size, we found that the training of the model degraded and the testing accuracy fell accordingly.<\/span><\/span><\/p>\n<\/li>\n<\/ul>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">Here the test accuracy is much higher than the validation accuracy. The test accuracy of 99.88% reflects a high degree of precision of the network, which may be because of overfitting. Nevertheless, our designed network gives a 99.88% prediction result and clearly classifies cats versus dogs. <\/span><\/span><\/p>\n<ol start=\"5\" type=\"I\">\n<li><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: medium\"><b>Conclusion<\/b><\/span><\/span><\/li>\n<\/ol>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: small\">The test criteria for image detection are average loss and accuracy. Here we also calculate the validation loss and validation accuracy. For the input layers we choose the ReLU activation function and for the output layer we choose sigmoid as the activation function. \u2018ADAM\u2019 is the optimization algorithm and the loss function is &#8216;binary cross entropy&#8217;. After completing the execution of the program we found that the test accuracy of the network is 99.88% and the validation accuracy is 72%. The output result is approximately 100% correct, which implies that our network is overfitted. But if we choose the same network for the same application it will work with the same accuracy. 
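Since the test criteria used throughout are average loss and accuracy, a minimal pure-Python sketch of how binary cross entropy and accuracy are computed from sigmoid outputs may help. The predictions and labels below are made-up illustrative values, not results from the trained network.

```python
import math

# Binary cross entropy and accuracy for sigmoid outputs, as used to
# evaluate the cat-vs-dog classifier. The four predictions and labels
# below are hypothetical illustrative values (e.g. 0 = cat, 1 = dog).

def binary_cross_entropy(y_true, y_pred):
    """Average binary cross-entropy loss over all samples."""
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, y_pred)) / len(y_true)

def accuracy(y_true, y_pred, threshold=0.5):
    """Fraction of predictions on the correct side of the threshold."""
    return sum((p >= threshold) == (y == 1)
               for y, p in zip(y_true, y_pred)) / len(y_true)

labels = [1, 0, 1, 0]
preds  = [0.9, 0.2, 0.3, 0.6]   # two correct, two wrong at threshold 0.5

print(round(binary_cross_entropy(labels, preds), 3))  # 0.612
print(accuracy(labels, preds))                        # 0.5
```

Note that a low average loss requires confident correct probabilities, not just correct thresholded labels, which is why loss and accuracy are reported separately.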
ADAM performs better than ADAGRAD.<\/span><\/span><\/p>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: medium\"><b>References<\/b><\/span><\/span><\/p>\n<ol>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: medium\"><span style=\"font-size: small\">R. S. Cabral, F. Torre, J. P. Costeira, and A. Bernardino. Matrix completion for multi-label image classification. In Advances in Neural Information Processing Systems, pages 190\u2013198, 2011. <\/span><\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: medium\"><span style=\"font-size: small\">T.-S. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y. Zheng. NUS-WIDE: a real-world web image database from National University of Singapore. In Proceedings of the ACM international conference on image and video retrieval, page 48. ACM, 2009. <\/span><\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: medium\"><span style=\"font-size: small\">M. M. Cisse, N. Usunier, T. Artieres, and P. Gallinari. Robust bloom filters for large multilabel classification tasks. In Advances in Neural Information Processing Systems, pages 1851\u20131859, 2013. <\/span><\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: medium\"><span style=\"font-size: small\">G. E. Dahl, T. N. Sainath, and G. E. Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8609\u20138613. IEEE, 2013. <\/span><\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: medium\"><span style=\"font-size: small\">J. 
Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. FeiFei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248\u2013255. IEEE, 2009. <\/span><\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: medium\"><span style=\"font-size: small\">M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303\u2013 338, 2010. <\/span><\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: medium\"><span style=\"font-size: small\">A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, T. Mikolov, et al. Devise: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems, pages 2121\u20132129, 2013. <\/span><\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: medium\"><span style=\"font-size: small\">N. Ghamrawi and A. McCallum. Collective multi-label classification. In Proceedings of the 14th ACM international conference on Information and knowledge management, pages 195\u2013200. ACM, 2005. <\/span><\/span><\/span><\/p>\n<\/li>\n<li>\n<p align=\"JUSTIFY\"><span style=\"font-family: Times New Roman, serif\"><span style=\"font-size: medium\"><span style=\"font-size: small\">Y. Gong, Y. Jia, T. Leung, A. Toshev, and S. Ioffe. Deep convolutional ranking for multilabel image annotation. arXiv preprint arXiv:1312.4894, 2013. 
<\/span><\/span><\/span><\/p>\n<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Comparison for Image Prediction with Different Sample Count and Different Number of Epoch Kiran Choudhary1 and Sohit Agarwal2 1,2Suresh Gyan Vihar University, Jaipur Abstract: Image classification is a complex process that may be affected by many factors. There are supervised and unsupervised classification techniques. The emphasis is placed on the deep neural network classification approach [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[75],"tags":[],"class_list":["post-3176","post","type-post","status-publish","format-standard","hentry","category-volume-5-issue-2-2019"]}