{"id":1285,"date":"2018-12-04T06:22:07","date_gmt":"2018-12-04T06:22:07","guid":{"rendered":"http:\/\/www.gyanvihar.org\/journals\/?p=1285"},"modified":"2019-05-27T09:02:11","modified_gmt":"2019-05-27T09:02:11","slug":"improved-security-of-neural-cryptography-using-dont-trust-my-partner-and-error-prediction-2","status":"publish","type":"post","link":"https:\/\/www.gyanvihar.org\/journals\/improved-security-of-neural-cryptography-using-dont-trust-my-partner-and-error-prediction-2\/","title":{"rendered":"Improved Security of Neural Cryptography Using Don\u2019t-Trust-My-Partner and Error Prediction"},"content":{"rendered":"<p style=\"text-align: center\"><strong>Robert Kozma<\/strong><\/p>\n<p style=\"text-align: center\">University of memphis<\/p>\n<p style=\"text-align: justify\"><strong>Abstract:-<\/strong>Neural cryptography deals with the problem of key exchange using the mutual learning concept between two neural networks. The two networks will exchange their outputs (in bits) so that the key between the two communicating parties is eventually represented in the final learned weights and the two networks are said to be synchronized. Security of neural synchronization depends on the probability that an attacker can synchronize with any of the two parties during the training process, so decreasing this probability improves the reliability of exchanging their output bits through a public channel. This work proposes an\u00a0exchange technique that will disrupt the attacker confidence in the exchanged outputs during training. The algorithm is based on one party sending erroneous output bits with the other party being capable of predicting and removing this error. 
The proposed approach is shown to outperform the synchronization-with-feedback algorithm in the time needed for the parties to synchronize.<\/p>\n<p style=\"text-align: justify\"><strong><em>Keywords:<\/em><\/strong> <strong>cryptography, mutual learning, neural cryptography, neural synchronization, tree parity machine<\/strong><\/p>\n<p>I. INTRODUCTION<\/p>\n<p style=\"text-align: justify\">Neural networks (NNs) are able to solve so-called non-formalized or weakly formalized problems that require a learning process based on real experimental data [1]. Supervised NN models are trained on input\/output pairs to achieve a certain task. This training is based on adjusting the initial randomized synaptic weights by applying a predefined learning rule. Two NNs having the same structure but different initial synaptic weights can perform the same task if both are trained on the same input\/output pairs, while the final synaptic weights of the two networks need not be the same.<\/p>\n<p style=\"text-align: justify\">In fact, this phenomenon is very interesting and can be modified to achieve another goal, i.e., making the two networks converge to the same final weights. One way to do that is for the two networks to be presented with common input patterns while being trained on the output of each other instead of predefined target patterns. The applied learning rule needs to be efficient enough that the two synaptic weight vectors of the networks become close to each other and thus correlated.<\/p>\n<p style=\"text-align: justify\">Hence, the final two weight vectors are almost identical. The correlation between the two weight vectors is also called the overlap. When the overlap is 100% (i.e., the two weight vectors are identical), the two networks are said to have synchronized with each other. An aim of cryptography is to transmit a secret message between two partners, A and B, while an attacker, E, who happens to access the communication channel is not able to figure out the content of this message. A number of methods have been introduced to achieve this goal [2][3][4][5]. 
In 1976, Diffie and Hellman developed a mechanism based on number theory by which a secret key can be exchanged between two parties over a public channel that is accessible to any attacker [6]. Alternatively, two networks trained on each other\u2019s outputs are able to achieve the same objective by means of mutual learning [5]. The most common model used in neural cryptography is known as the Tree Parity Machine (TPM), since it keeps the internal state of the two parties secret and is thus more secure than a simple network. The aim of this work is to introduce a mechanism that improves the security of the mutual learning process, so that the attacker finds it more difficult to listen to the communication between the two parties during the period in which they increase the overlap of their weight vectors.<\/p>\n<p>II. SYNCHRONIZATION<\/p>\n<p style=\"text-align: justify\">Synchronization between different entities is a known phenomenon that exists in different physical and biological systems. Synchronization in biological systems can be found in the behaviour of Southeast Asian fireflies [1], which is a biological type of phase synchronization of multiple oscillators. 
Also, another type of synchronization exists in chaotic systems [2], and the synchronization process in artificial neural networks (NNs) can be exploited to secure information transmission.<\/p>\n<p style=\"text-align: justify\">This paper presents three algorithms to enhance the security of neural cryptography in such a way that the attacker faces difficulties in trusting the information transmitted on the public channel. The proposed algorithms tamper with the listening process, which is the basic mechanism the attacker depends on to break into the system. This paper is organized as follows. Section II presents the mutual learning method for both simple networks and the TPM. Section III summarizes the best-known attacks against mutual learning. In Section IV, the Don\u2019t Trust My Partner (DTMP) with error prediction approach is proposed to improve the security of exchanging the output bits of two communicating parties. Section V presents the possible break-in scenarios against the proposed method. In Section VI, the performance of the proposed algorithm is analyzed. Section VII presents simulation and experimental results for the DTMP algorithm. Section VIII introduces the Synchronization with Common Secret Feedback (SCSFB) algorithm as a modification of the synchronization-with-feedback algorithm. In Section IX, the two proposed approaches, i.e., DTMP and SCSFB, are combined to provide more secure communication.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-1309 aligncenter\" src=\"http:\/\/www.gyanvihar.org\/journals\/wp-content\/uploads\/2018\/12\/Capture-54.jpg\" alt=\"\" width=\"364\" height=\"298\" \/><\/p>\n<p>MUTUAL LEARNING IN TPMS<\/p>\n<p style=\"text-align: justify\">The basic building block for the mutual learning process is a single perceptron. Fig. 1 depicts two communicating perceptrons having different initial weights w<sup>A\/B<\/sup> and receiving the same random input x at every training step. 
The mutual learning process is based on exchanging the output bits \u03c3<sup>A\/B<\/sup> between the two perceptrons. The output \u03c3 is defined as<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-1311 aligncenter\" src=\"http:\/\/www.gyanvihar.org\/journals\/wp-content\/uploads\/2018\/12\/Capture-55.jpg\" alt=\"\" width=\"346\" height=\"87\" \/><\/p>\n<p>At the end of training step t, the weight vectors w are updated using the following learning rule [10]:<\/p>\n<p>w<sup>A<\/sup>(t+1) = w<sup>A<\/sup>(t) + (\u03b7\/N) x(t) \u03c3<sup>B<\/sup>(t) \u0398(\u2212\u03c3<sup>A<\/sup>(t)\u03c3<sup>B<\/sup>(t))<\/p>\n<p>w<sup>B<\/sup>(t+1) = w<sup>B<\/sup>(t) + (\u03b7\/N) x(t) \u03c3<sup>A<\/sup>(t) \u0398(\u2212\u03c3<sup>A<\/sup>(t)\u03c3<sup>B<\/sup>(t))\u00a0\u00a0\u00a0\u00a0 (2)<\/p>\n<p style=\"text-align: justify\">where \u03b7 is a suitable learning rate and \u0398 is the step function. Clearly, the weights are updated only if the two output bits \u03c3<sup>A<\/sup> and \u03c3<sup>B<\/sup> disagree. After each weight update, the weight vectors of the two networks are kept normalized. If the learning rate exceeds a critical value \u03b7<sub>c<\/sub> = 1.816, the two weight vectors will satisfy the condition w<sup>A<\/sup> = \u2212w<sup>B<\/sup> [10]. There are some restrictions on both the input and weight vector generation mechanisms in order to achieve full synchronization. The input pattern x has to be an N-dimensional vector whose components are generated from a zero-mean, unit-variance Gaussian distribution (continuous values). 
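The learning rule of Eq. (2) can be sketched numerically as follows. This is an illustrative Python sketch, not code from the paper; the function names, the dimension N, the learning rate, and the number of steps are all assumptions chosen for demonstration.

```python
import numpy as np

def output_bit(w, x):
    # Perceptron output bit sigma = sign(w . x); sign(0) is mapped to +1.
    s = np.sign(w @ x)
    return 1.0 if s == 0 else float(s)

def mutual_step(wA, wB, x, eta):
    # One training step of Eq. (2): each party moves toward the other's
    # output bit only when the bits disagree (the step function
    # Theta(-sA*sB)), then both weight vectors are re-normalized.
    N = len(x)
    sA, sB = output_bit(wA, x), output_bit(wB, x)
    if sA != sB:
        wA = wA + (eta / N) * x * sB
        wB = wB + (eta / N) * x * sA
    return wA / np.linalg.norm(wA), wB / np.linalg.norm(wB)

rng = np.random.default_rng(0)
N, eta = 100, 1.0          # illustrative choices; eta is below eta_c ~ 1.816
wA = rng.normal(size=N); wA /= np.linalg.norm(wA)
wB = rng.normal(size=N); wB /= np.linalg.norm(wB)

for _ in range(20000):
    x = rng.normal(size=N)  # zero-mean, unit-variance Gaussian input
    wA, wB = mutual_step(wA, wB, x, eta)

print(f"overlap wA.wB after training: {wA @ wB:.3f}")
```

The printed overlap is the correlation between the two weight vectors; under the rule above it evolves away from its near-zero starting value as the parties train on each other's bits.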
Also, the weight vector w is an N-dimensional vector with continuous components that should be normalized, i.e., w<sup>T<\/sup>w = 1, since only normalized weights can synchronize.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-1316 aligncenter\" src=\"http:\/\/www.gyanvihar.org\/journals\/wp-content\/uploads\/2018\/12\/Capture-56.jpg\" alt=\"\" width=\"339\" height=\"203\" \/><\/p>\n<p>With a simple perceptron structure, an attacker is able to synchronize with a probability of 99% [10]. Consequently, another structure should be developed to hide the internal state of each party so that it cannot be reproduced from the transmitted information. A TPM structure that consists of K perceptrons (Fig. 2), each with an output bit \u03c3<sub>k<\/sub>, is a candidate for improving the security of the mutual learning process and can be defined as<\/p>\n<p>\u03c3<sub>k<\/sub> = sign(w<sub>k<\/sub><sup>T<\/sup> x<sub>k<\/sub>)\u00a0\u00a0\u00a0\u00a0 (3)<br \/>\nThe output of the TPM is<br \/>\n\u03c4 = \u220f<sup>K<\/sup><sub>k=1<\/sub> \u03c3<sub>k<\/sub>\u00a0\u00a0\u00a0\u00a0 (4)<\/p>\n<p style=\"text-align: justify\">Continuous input and weight vector components are not suitable for cryptographic applications. When only digital signaling (0s and 1s) is permitted, the input and weight components should be drawn from a discrete distribution rather than a continuous one. Bipolar input patterns x \u2208 {\u22121, 1}<sup>N<\/sup> and discrete weight vectors w<sub>k,j<\/sub> \u2208 {\u2212L, \u2212L+1, \u2026, L\u22121, L}<sup>N<\/sup> will be used here, where L is an integer chosen by the designer to represent the synaptic depth of the network [10]. The two partners who need to share a common key will maintain two identical TPMs.<\/p>\n<p>REFERENCES<br \/>\n[1] A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences. Cambridge, U.K.: Cambridge Univ. Press, 2003.<\/p>\n<p style=\"text-align: justify\">[2] C.-M. 
Kim, S. Rim, and W.-H. Kye, \u201cSequential synchronization of chaotic systems with an application to communication,\u201d Phys. Rev. Lett., vol. 88, no. 1, pp. 014103-1\u2013014103-4, Dec. 2001.<br \/>\n[3] A. I. Galushkin, Neural Network Theory. New York: Springer-Verlag, 2007.<br \/>\n[4] G. P\u00f6lzlbauer, T. Lidy, and A. Rauber, \u201cDecision manifolds\u2014a supervised learning algorithm based on self-organization,\u201d IEEE Trans. Neural Netw., vol. 19, no. 9, pp. 1518\u20131530, Sep. 2008.<\/p>\n<p>[5] I. Kanter, W. Kinzel, and E. Kanter, \u201cSecure exchange of information by synchronization of neural networks,\u201d Europhys. Lett., vol. 57, no. 1, pp. 141\u2013147, 2002.<\/p>\n<p>[6] W. Diffie and M. Hellman, \u201cNew directions in cryptography,\u201d IEEE Trans. Inform. Theory, vol. 22, no. 6, pp. 644\u2013654, Nov. 1976.<\/p>\n<p>[7] A. J. Menezes, S. A. Vanstone, and P. C. Van Oorschot, Handbook of Applied Cryptography. Boca Raton, FL: CRC Press, 1996.<\/p>\n<p>[8] B. Schneier, Applied Cryptography: Protocols, Algorithms, and Source Code in C, 2nd ed. New York: Wiley, 1995.<\/p>\n<p>[9] W. Stallings, Cryptography and Network Security: Principles and Practice. Upper Saddle River, NJ: Pearson, 2002.<\/p>\n<p>[10] E. Klein, R. Mislovaty, I. Kanter, A. Ruttor, and W. Kinzel, \u201cSynchronization of neural networks by mutual learning and its application to cryptography,\u201d in Advances in Neural Information Processing Systems 17, L. K. Saul, Y. Weiss, and L. Bottou, Eds. Cambridge, MA: MIT Press, 2005, pp. 689\u2013696.<\/p>\n<p style=\"text-align: justify\">AUTHORS INFORMATION<br \/>\nAhmed M. Allam received the Graduate degree from the Computer and Systems Engineering Department, Ain Shams University, Cairo, Egypt, in 2008. He is currently pursuing the Master\u2019s degree at the same university. 
He joined Mentor Graphics Egypt, Cairo, as a Quality Assurance Engineer in 2008. In 2010, he joined a Synopsys partner, Swiftronix, Cairo, as a Digital Design and Verification Engineer. His current research interests include computational intelligence, digital design, cryptography, quantum computing, and quantum cryptography.<\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":5,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26,31],"tags":[],"class_list":["post-1285","post","type-post","status-publish","format-standard","hentry","category-international-journal-of-converging-technologies-management","category-volume-3-issue-1-2017"]}