Simulation of the Spiking Neural Network based on Practical Memristor

Abstract
In order to gain a better understanding of the brain and to explore biologically inspired computation, spike-based neural computation has attracted significant research attention. Spiking neural networks (SNNs), inspired by observed biological structures, have been increasingly applied to pattern recognition tasks. In this work, a single-layer SNN architecture based on the spike-timing-dependent plasticity (STDP) characteristics measured on a practical device is proposed. The device data are derived from a fabricated Ag/GeSe/TiN memristor. The network has been tested on the MNIST dataset, and the classification accuracy reaches 90.2%. Furthermore, the impact of device instability on SNN performance is discussed, which provides guidelines for fabricating memristors used in STDP-based SNN architectures.


Introduction
Brain-inspired computing is an emerging field that aims to achieve artificial intelligence by simulating the neural networks of the human brain, extending the capabilities of information technology beyond the von Neumann paradigm [1,2]. In biological systems, synapses are the bridges between neurons [1,3]; they can change their strength to enhance (synaptic potentiation) or weaken (synaptic depression) the connection between two neurons. This mechanism, used by the brain in learning processes, is called synaptic plasticity [2].
Spiking neural networks (SNNs) are an important route to brain-inspired computing, and memristors provide a new physical basis for improving the fidelity of synaptic emulation [4][5][6] and thus for the further development of brain-like computing. The nanoscale size and low energy consumption of memristors can enable a device density comparable to that of the human brain [7][8][9][10]. Moreover, memristor conductance can be tuned according to the delay between two spikes successively applied to the device terminals. This regulation of conductance implements the typical biological learning rule named spike-timing-dependent plasticity (STDP) [11][12][13]. Constructing an SNN with memristors as synapses can take full advantage of their high-density integration and low power consumption, while the neurons should provide functions such as leaky-integrate-and-fire (LIF), an inhibitory (refractory) period, bidirectional transmission, and lateral inhibition [14][15].
In recent years many SNN architectures relying on different models of STDP have been developed for pattern recognition tasks [16]. A Poissonian spike representation of the MNIST dataset achieves good classification performance [17]. A computer simulation achieved a classification task on MNIST and had the potential to be implemented with memristors [18], and immunity to device variations was also simulated [19]. However, many studies discuss SNN architectures based on ideal synaptic devices without considering the characteristics of practical devices or the device yield. Research on SNN architectures that accounts for practical device characteristics therefore has considerable value.
In this work, we first analyze the STDP characteristics of the Ag/GeSe/TiN device and extract a synaptic data model from it. Then an SNN architecture using this data is proposed, and its performance is demonstrated on the MNIST dataset. In addition, the impact of device variability is tested on a 25×10-scale SNN architecture.

Synapse Device
To better emulate the STDP characteristics, the Ag/GeSe/TiN memristor was fabricated with a 5 μm × 5 μm effective area. Vertical TiN lines (40 nm) acting as the bottom electrode were deposited on the SiO2/Si substrate by magnetron sputtering after the first lithography process. Then the solid electrolyte GeSe (50 nm), acting as the resistive switching layer, was grown by magnetron sputtering after the second lithography process; the GeSe layer was patterned to uncover the TiN electrode. After the lift-off process, the Ag top electrode was formed to complete the Ag/GeSe/TiN stack. To ensure the validity of the devices, all electrical characteristics were tested on a Keithley 4200 semiconductor parameter analyser. The TiN electrode was grounded while the voltage was applied to the Ag electrode. The applied signal was equal to the difference between the pre-spike and the post-spike.
To verify the STDP characteristics of the Ag/GeSe/TiN memristor, a pair of positive (0.5 V, 60 μs) and negative (-0.5 V, 60 μs) pulses was adopted as the spike signal, and the spike signal was applied to the two terminals of the memristor with different Δt. The experimental results are shown in Figure 1. Here Δt represents the interval between the pre- and post-spike, and ΔG represents the conductance change between these two moments. It can be seen that the memristor conductance increases (potentiation, ΔG > 0) when the pre-spike appears before the post-spike (Δt > 0), and, vice versa, the conductance decreases (depression, ΔG < 0) when the pre-spike follows the post-spike (Δt < 0). Moreover, the shorter Δt is, the greater the change in device conductance, indicating that the Ag/GeSe/TiN device exhibits STDP characteristics. With 200 pulses, a conductance window of 10 can be achieved, which has been demonstrated to be sufficient for neuromorphic applications [20,21]. Based on the experimental data of the Ag/GeSe/TiN device, we construct a synaptic data model, which is used for the synapses of the SNN architecture.
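The qualitative shape of the measured STDP window can be captured by the standard exponential form. This is a generic sketch; the paper fits its synaptic model directly to the measured Ag/GeSe/TiN data, so the amplitudes and time constants below are placeholders, not device parameters.

```python
import math

# Generic exponential STDP window:
#   dG = +A_plus  * exp(-dt/tau_plus)   for dt > 0 (potentiation)
#   dG = -A_minus * exp(+dt/tau_minus)  for dt < 0 (depression)
# A_plus, A_minus, tau_plus, tau_minus are illustrative placeholders.

def stdp_dg(dt_us, a_plus=1.0, a_minus=1.0, tau_plus=60.0, tau_minus=60.0):
    """Conductance change (arbitrary units) for a pre/post interval dt (in us)."""
    if dt_us > 0:      # pre-spike before post-spike -> potentiation
        return a_plus * math.exp(-dt_us / tau_plus)
    if dt_us < 0:      # post-spike before pre-spike -> depression
        return -a_minus * math.exp(dt_us / tau_minus)
    return 0.0

# Shorter intervals produce larger conductance changes, as in Figure 1.
assert stdp_dg(10) > stdp_dg(100) > 0 > stdp_dg(-100) > stdp_dg(-10)
```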

Architecture of SNN
According to the practical memristor characteristics and the synaptic data model, the architecture of the SNN has been proposed. The network has two layers: an input layer and an output layer. Every input neuron corresponds one-to-one with a pixel of the image. The output layer is divided into 10 blocks, one per class, and the number of output neurons n in each block can be chosen as appropriate. In this architecture we choose n = 40 in consideration of classification accuracy and power consumption, as shown in Figure 2. There is thus only one layer of synapses, whose weights can be adjusted through the STDP characteristics of the Ag/GeSe/TiN device. The rules for adjusting the weights include bidirectional transmission, integrate-and-fire, and lateral inhibition. The number of synapses is 784 × 40 × 10 = 313,600.
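The dimensions above fix the size of the trainable synapse array. A minimal sketch, assuming a random uniform conductance initialization between placeholder bounds G_MIN and G_MAX (the actual initial conductances come from the device model):

```python
import random

# Synaptic conductance array for the single trainable layer:
# 784 input neurons fully connected to 10 blocks of 40 output neurons.
# G_MIN and G_MAX are placeholder bounds, not measured device values.
N_IN, N_BLOCKS, N_PER_BLOCK = 784, 10, 40
G_MIN, G_MAX = 0.1, 1.0

weights = [[[random.uniform(G_MIN, G_MAX) for _ in range(N_PER_BLOCK)]
            for _ in range(N_BLOCKS)]
           for _ in range(N_IN)]

n_synapses = N_IN * N_BLOCKS * N_PER_BLOCK
assert n_synapses == 313600   # matches 784 x 40 x 10 in the text
```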
In the training process, a pre-neuron produces a spike if its corresponding pixel in the image is white, and produces nothing if the pixel is black. The post-neurons then receive the spikes transported through the memristors. The post-spike produced by a post-neuron overlaps with the pre-spike, as shown in Figure 3; the voltage across the overlapped region exceeds the threshold and changes the memristor conductance. To increase the degree of differentiation, the synapses in the blocks are trained in order.
The testing process is similar to the training process, with a few differences. First, the pre-neurons corresponding to black pixels produce a small negative voltage. Second, the voltage in the overlapped region is not large enough to change the memristor conductance. Finally, the post-spikes are compared with each other, and the block with the largest voltage gives the predicted label. To evaluate the performance of the architecture, we choose the MNIST benchmark. The MNIST training set (60,000 examples) is used to train the architecture, and the classification accuracy is obtained on the test set (10,000 examples). Every image has 28 × 28 = 784 pixels, in one-to-one correspondence with the input neurons. An input neuron receives a pulse if its pixel value exceeds the threshold we set; otherwise it receives a fixed small negative voltage. Before training, we divide the training set into 10 classes according to their labels. For example, all training images labeled '4' are imported into block 4, whose synaptic weights then learn n major styles of '4' without supervision, as shown in Figure 4. The training simulation takes about 43 seconds, and the final classification accuracy is about 90.2%.
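The block-wise readout described above can be sketched as follows. This is an illustrative simplification under stated assumptions: trained conductances are replaced by random weights, black pixels inject a placeholder small negative value V_NEG, and each block is scored by its strongest post-neuron before taking the winning block as the label.

```python
import random

# Inference sketch for the 10-block readout: white pixels inject +1,
# black pixels inject a small negative voltage, and the predicted label
# is the block whose post-neurons accumulate the largest response.
# V_NEG and the random weights are illustrative assumptions.
N_IN, N_BLOCKS, N_PER_BLOCK = 784, 10, 40
V_NEG = -0.1  # small negative input for black pixels (testing only)

random.seed(0)
weights = [[[random.random() for _ in range(N_PER_BLOCK)]
            for _ in range(N_BLOCKS)] for _ in range(N_IN)]

def classify(pixels, weights):
    """pixels: 784 binary values (1 = white, 0 = black)."""
    best_block, best_v = 0, float("-inf")
    for b in range(N_BLOCKS):
        block_v = float("-inf")
        for n in range(N_PER_BLOCK):
            v = sum((1.0 if pixels[i] else V_NEG) * weights[i][b][n]
                    for i in range(N_IN))
            block_v = max(block_v, v)   # strongest neuron in the block
        if block_v > best_v:
            best_block, best_v = b, block_v
    return best_block

label = classify([random.randint(0, 1) for _ in range(N_IN)], weights)
```

With trained weights, each block's neurons respond most strongly to the n learned styles of its own digit class, so the winning block recovers the label.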

Impact analysis of device variability
The performance of the SNN architecture has been tested on the MNIST dataset with good results. However, the memristors cannot be adjusted precisely according to the requirements of the algorithm, owing to insufficient understanding of the devices. Moreover, if the architecture were tested on the full MNIST dataset with device variability included, the results would degrade so severely that the influence of the variability could not be analyzed. Therefore, a similar architecture is adopted to verify the classification accuracy on a group of 5×5-pixel Chinese numerals under device variability, as shown in Figure 5; the number of pre-neurons is 25, the number of post-neurons is 10, and the rest of the design is unchanged.
In this section, we discuss the impact of three types of device variability: the maximum resistance R_max, the minimum resistance R_min, and the device yield D_y. In Tables 1 and 2, the final resistance is a random number between A(1 − B/2) and A(1 + B/2), where A represents R_max or R_min and B represents the fluctuation range F_r.
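The sampling rule above is straightforward to reproduce. A minimal sketch, where the nominal resistance and fluctuation range are illustrative values rather than the fabricated device's:

```python
import random

# Sampling a device resistance under the variability model of Tables 1-2:
# the final value is drawn uniformly from [A*(1 - B/2), A*(1 + B/2)],
# where A is the nominal R_max or R_min and B is the fluctuation range Fr.
# The nominal value and Fr below are placeholders.

def sample_resistance(nominal, fr, rng=random):
    return rng.uniform(nominal * (1 - fr / 2), nominal * (1 + fr / 2))

r_min_nominal, fr = 1e4, 0.5   # illustrative: 10 kOhm nominal, 50% spread
samples = [sample_resistance(r_min_nominal, fr) for _ in range(10000)]
assert all(7.5e3 <= r <= 1.25e4 for r in samples)
```

Each synapse in the variability test draws its R_max and R_min independently in this way before training.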
Table 1 shows that the classification accuracy remains at 1 despite increases in the synaptic maximum resistance and its fluctuation range, indicating that variability in the maximum resistance makes little difference. Table 2 suggests that the classification accuracy decreases as the synaptic minimum resistance increases, and likewise as its fluctuation range increases. Variability in the minimum resistance therefore does matter: increasing either the minimum resistance or its fluctuation range reduces the classification accuracy.
In Table 3, we discuss the impact of device failure T_f in three forms, namely opposite switching T_o, maximum-resistance decrease T_md, and minimum-resistance decrease T_mi, from top to bottom. It can be seen that the classification accuracy decreases markedly as the device yield D_y decreases, and the three failure types yield similar classification accuracy at the same device yield. This implies that the device yield makes a great difference.
The tables show that, compared with the maximum resistance, the minimum-resistance variability of the synaptic device has more impact, and the device yield exerts the greatest effect under the same conditions. When an SNN architecture is proposed, maintaining the device yield is the first priority, followed by the synaptic minimum resistance and its fluctuation range; last come the maximum resistance and its fluctuation range.
Table 1: The impact of device variability of synaptic maximum resistance on classification accuracy.

Conclusion
In this paper, an SNN architecture is proposed based on a data model of the Ag/GeSe/TiN memristor, which exhibits useful STDP characteristics. The architecture has been tested on the MNIST dataset and obtains 90.2% classification accuracy. Furthermore, the performance of the architecture has been verified under device variability. The results show that, compared with the maximum resistance, the minimum-resistance variability of the synaptic device has more impact, and the device yield exerts the greatest effect on this pattern recognition task. The conclusions obtained in this work can be useful for fabricating practical synaptic devices to construct feasible SNNs.