{"id":24191,"date":"2023-09-16T12:59:40","date_gmt":"2023-09-16T03:59:40","guid":{"rendered":"https:\/\/ircn.jp\/?p=24191"},"modified":"2023-09-19T15:29:23","modified_gmt":"2023-09-19T06:29:23","slug":"20230915_kenichi_ohki","status":"publish","type":"post","link":"https:\/\/ircn.jp\/en\/pressrelease\/20230915_kenichi_ohki","title":{"rendered":"Brain inspires more robust AI --A new technique to protect sensitive AI-based applications from attackers--"},"content":{"rendered":"<p>Most artificially intelligent systems are based on neural networks, algorithms inspired by biological neurons found in the brain. These networks can consist of multiple layers, with inputs coming in one side and outputs going out of the other. The outputs can be used to make automatic decisions, for example, in driverless cars. Attacks to mislead a neural network can involve exploiting vulnerabilities in the input layers, but typically only the initial input layer is considered when engineering a defense. For the first time, researchers augmented a neural network\u2019s inner layers with a process involving random noise to improve its resilience.<\/p>\n<p>Artificial intelligence (AI) has become a relatively common thing; chances are you have a smartphone with an AI assistant or you use a search engine powered by AI. While it\u2019s a broad term that can include many different ways to essentially process information and sometimes make decisions, AI systems are often built using artificial neural networks (ANN) analogous to those of the brain. And like the brain, ANNs can sometimes get confused, either by accident or by the deliberate actions of a third party. 
Think of something like an optical illusion \u2014 it might make you feel like you are looking at one thing when you are really looking at another.<\/p>\n<p>The difference between things that confuse an ANN and things that might confuse us, however, is that some visual input could appear perfectly normal, or at least be understandable to us, yet nevertheless be interpreted as something completely different by an ANN.<br \/>\nA trivial example might be an image-classifying system mistaking a cat for a dog, but a more serious example could be a driverless car mistaking a stop signal for a right-of-way sign. And it\u2019s not just the already controversial example of driverless cars; there are medical diagnostic systems and many other sensitive applications that take inputs and inform, or even make, decisions that can affect people.<\/p>\n<p>As inputs aren\u2019t necessarily visual, it\u2019s not always easy to analyze at a glance why a system might have made a mistake. Attackers trying to disrupt a system based on ANNs can take advantage of this, subtly altering an anticipated input pattern so that it is misinterpreted and the system behaves wrongly, perhaps even harmfully. There are some defense techniques for attacks like these, but they have limitations. Recent graduate Jumpei Ukita and Professor Kenichi Ohki from the Department of Physiology at the University of Tokyo Graduate School of Medicine devised and tested a new way to improve ANN defenses.<\/p>\n<p>\u201cNeural networks typically comprise layers of virtual neurons. The first layers will often be responsible for analyzing inputs by identifying the elements that correspond to a certain input,\u201d said Ohki. \u201cAn attacker might supply an image with artifacts that trick the network into misclassifying it. A typical defense for such an attack might be to deliberately introduce some noise into this first layer. 
It may sound counterintuitive that adding noise would help, but doing so allows the network to adapt more flexibly to a visual scene or other set of inputs. However, this method is not always so effective, and we thought we could improve matters by looking beyond the input layer to further inside the network.\u201d<\/p>\n<p>Ukita and Ohki aren\u2019t just computer scientists. They have also studied the human brain, and this inspired them to apply a phenomenon they knew of there to an ANN: adding noise not only to the input layer, but to deeper layers as well. This is typically avoided, as it\u2019s feared that it will impact the effectiveness of the network under normal conditions. But the duo found this not to be the case; instead, the noise promoted greater adaptability in their test ANN, which reduced its susceptibility to simulated adversarial attacks.<\/p>\n<p>\u201cOur first step was to devise a hypothetical method of attack that strikes deeper than the input layer. Such an attack would need to withstand the resilience of a network with a standard noise defense on its input layer. We call these feature-space adversarial examples,\u201d said Ukita. \u201cThese attacks work by supplying an input intentionally far from, rather than near to, inputs that an ANN can correctly classify. The trick is to present subtly misleading artifacts to the deeper layers instead. Once we demonstrated the danger from such an attack, we injected random noise into the deeper hidden layers of the network to boost their adaptability and therefore their defensive capability. We are happy to report that it works.\u201d<\/p>\n<p>While the new idea does prove robust, the team wishes to develop it further to make it even more effective against anticipated attacks, as well as other kinds of attacks they have not yet tested it against. 
At present, the defense works only on this specific kind of attack.<\/p>\n<p>\u201cFuture attackers might devise attacks that can escape the feature-space noise we considered in this research,\u201d said Ukita. \u201cIndeed, attack and defense are two sides of the same coin; it\u2019s an arms race that neither side will back down from, so we need to continually iterate, improve and innovate new ideas in order to protect the systems we use every day.\u201d<\/p>\n<p>&nbsp;<\/p>\n<div style=\"text-align: center;\"><img decoding=\"async\" src=\"https:\/\/ircn.jp\/wp-content\/uploads\/2023\/09\/7d6fddc4d5f4457e598413974089fe73.png\" alt=\"\" style=\"max-width: 100%;\" \/><\/div>\n<div style=\"padding:0px 60px\">\n<p><strong>Bird or monkey?<\/strong> To our eyes the input images x1 and x2 look the same, but hidden features nudge a typical neural network into classifying this bird image as a monkey by mistake. The images are said to be distant in the input space, but close in the hidden-layer space. The researchers aimed to close this exploit.<br \/>\n\u00a92023 Ohki & Ukita CC-BY<\/p>\n<\/div>\n<p>&nbsp;<\/p>\n<div style=\"text-align: center;\"><img decoding=\"async\" src=\"https:\/\/ircn.jp\/wp-content\/uploads\/2023\/09\/7adfed136bf50c6b1f5c23f5919c3a0a.png\" alt=\"\" style=\"max-width: 100%;\" \/><\/div>\n<div style=\"padding:0px 60px\">\n<p><strong>Is it a bird? Is it a plane?<\/strong> This is a sample of the images the researchers generated for their simulated attack prior to running their new defense method. The x1 images were classified correctly, while the x2 images are the adversarial examples that tricked an undefended network into classifying them wrongly.<br \/>\n\u00a92023 Ohki & Ukita CC-BY<\/p>\n<\/div>\n<p>Journal article: Jumpei Ukita and Kenichi Ohki. 
\u201c<em>Adversarial attacks and defenses using feature-space stochasticity<\/em>\u201d, Neural Networks, <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0893608023004422?via%3Dihub\"><font color=\"blue\">DOI: 10.1016\/j.neunet.2023.08.022<\/font><\/a><br \/>\nURL: <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0893608023004422?via%3Dihub\"><font color=\"blue\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0893608023004422?via%3Dihub<\/font><\/a><\/p>\n<p>Funding:<br \/>\nThis work was supported by Brain Mapping by Integrated Neurotechnologies for Disease Studies (Brain\/MINDS) from Japan Agency for Medical Research and Development (AMED) (14533320, JP16dm0207034, JP20dm0207048 to K.O.); CREST-JST (JPMJCR22P1 to K.O.); Institute for AI and Beyond (to K.O.); JSPS KAKENHI (25221001, 19H05642, 20H05917 to K.O.); Takeda Science Foundation (to J.U.); and Masayoshi Son Foundation (to J.U.).<\/p>\n<p>Departmental links:<br \/>\nOhki Lab - <a href=\"https:\/\/physiol1.m.u-tokyo.ac.jp\/ern24596\/en\/\"><font color=\"blue\">https:\/\/physiol1.m.u-tokyo.ac.jp\/ern24596\/en\/<\/font><\/a><br \/>\nGraduate School of Medicine - <a href=\"https:\/\/www.m.u-tokyo.ac.jp\/english\/\"><font color=\"blue\">https:\/\/www.m.u-tokyo.ac.jp\/english\/<\/font><\/a><br \/>\nInternational Research Center for Neurointelligence - <a href=\"https:\/\/ircn.jp\/en\/\"><font color=\"blue\">https:\/\/ircn.jp\/en\/<\/font><\/a><br \/>\nInstitute for AI and Beyond - <a href=\"https:\/\/beyondai.jp\/?lang=en\"><font color=\"blue\">https:\/\/beyondai.jp\/?lang=en<\/font><\/a><\/p>\n<p>Research contact:<br \/>\nProfessor <a href=\"https:\/\/ircn.jp\/en\/mission\/people\/kenichi_ohki\"><font color=\"blue\">Kenichi Ohki<\/font><\/a><br \/>\nDepartment of Physiology, Graduate School of Medicine,<br \/>\nThe University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan<\/p>\n<p>Press contact:<br 
\/>\nMr. Rohan Mehra<br \/>\nPublic Relations Group, The University of Tokyo,<br \/>\n7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan<\/p>\n<p>About the University of Tokyo:<br \/>\nThe University of Tokyo is Japan's leading university and one of the world's top research universities. The vast research output of some 6,000 researchers is published in the world's top journals across the arts and sciences. Our vibrant student body of around 15,000 undergraduate and 15,000 graduate students includes over 4,000 international students. Find out more at <a href=\"https:\/\/www.u-tokyo.ac.jp\/en\/\" rel=\"noopener noreferrer\" target=\"_blank\"><font color=\"blue\">www.u-tokyo.ac.jp\/en\/<\/font><\/a> or follow us on Twitter at @UTokyo_News_en.<\/p>\n<div align=\"right\">This article is reprinted from <a href=\"https:\/\/www.u-tokyo.ac.jp\/focus\/en\/press\/z0508_00311.html\" rel=\"noopener noreferrer\" target=\"_blank\"><font color=\"blue\"><strong>UTokyo FOCUS<\/strong><\/font><\/a>.<\/div>","protected":false},"excerpt":{"rendered":"Most artificially intelligent systems are based on neural networks, algorithms inspired by biological neurons found in the brain. 
These networks can consist of multiple layers, with inputs coming in o [&hellip;]","protected":false},"author":11,"featured_media":24222,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_links_to":"","_links_to_target":""},"categories":[185],"tags":[],"acf":[],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/ircn.jp\/wp-content\/uploads\/2023\/09\/29d3bc99193cb0b837fb8cc53f050ae5-4-e1695087382432.png","jetpack_shortlink":"https:\/\/wp.me\/p9Xf4o-6ib","_links":{"self":[{"href":"https:\/\/ircn.jp\/en\/wp-json\/wp\/v2\/posts\/24191"}],"collection":[{"href":"https:\/\/ircn.jp\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ircn.jp\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ircn.jp\/en\/wp-json\/wp\/v2\/users\/11"}],"replies":[{"embeddable":true,"href":"https:\/\/ircn.jp\/en\/wp-json\/wp\/v2\/comments?post=24191"}],"version-history":[{"count":56,"href":"https:\/\/ircn.jp\/en\/wp-json\/wp\/v2\/posts\/24191\/revisions"}],"predecessor-version":[{"id":24271,"href":"https:\/\/ircn.jp\/en\/wp-json\/wp\/v2\/posts\/24191\/revisions\/24271"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ircn.jp\/en\/wp-json\/wp\/v2\/media\/24222"}],"wp:attachment":[{"href":"https:\/\/ircn.jp\/en\/wp-json\/wp\/v2\/media?parent=24191"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ircn.jp\/en\/wp-json\/wp\/v2\/categories?post=24191"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ircn.jp\/en\/wp-json\/wp\/v2\/tags?post=24191"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}