Real-world data processing applications require compact, low-latency, low-power computing systems. With their event-driven computing capabilities, complementary metal-oxide-semiconductor (CMOS) hybrid memristive neuromorphic architectures provide an ideal hardware foundation for such tasks. To demonstrate the full potential of such systems, we propose and experimentally demonstrate a comprehensive sensory processing solution for a real-world object localization application. Drawing inspiration from barn owl neuroanatomy, we have developed a bioinspired, event-driven object localization system that combines state-of-the-art piezoelectric micromachined ultrasonic transducers with a neuromorphic computational graph based on resistive memory. We present measurements of a fabricated system that includes an RRAM-based coincidence detector, delay-line circuitry, and a fully customizable ultrasonic transducer. These experimental results are used to calibrate system-level simulations, which are then used to evaluate the angular resolution and energy efficiency of the object localization model. The results show that our approach can be several orders of magnitude more energy efficient than microcontrollers performing the same task.
We are entering an era of ubiquitous computing in which the number of devices and systems deployed to help us in our daily lives is growing exponentially. These systems are expected to run continuously, consuming as little power as possible while learning to interpret the data they collect from multiple sensors in real time and producing, for example, a binary output as the result of classification or recognition tasks. One of the most important steps required to achieve this goal is extracting useful and compact information from noisy and often incomplete sensory data. Conventional engineering approaches typically sample sensor signals at a constant, high rate, generating large amounts of data even in the absence of useful inputs. In addition, these methods use complex digital signal processing techniques to pre-process the (often noisy) input data. Instead, biology offers alternative solutions for processing noisy sensory data using energy-efficient, asynchronous, event-driven approaches (spikes)2,3. Neuromorphic computing takes inspiration from biological systems to reduce computational costs, in terms of energy and memory requirements, compared to traditional signal processing methods4,5,6. Recently, innovative general-purpose brain-inspired systems implementing spiking neural networks (TrueNorth7, BrainScaleS8, DYNAP-SE9, Loihi10, SpiNNaker11) have been demonstrated. These processors provide low-power, low-latency solutions for machine learning and cortical circuit modeling. To fully exploit their energy efficiency, these neuromorphic processors must be directly connected to event-driven sensors12,13. However, only a few sensing devices available today directly provide event-driven data. Prominent examples are dynamic vision sensors (DVS) for vision applications such as tracking and motion detection14,15,16,17, the silicon cochlea18 and neuromorphic auditory sensors (NAS)19 for auditory signal processing, olfactory sensors20, and numerous examples21,22 of tactile and texture sensors.
In this paper, we present a newly developed event-driven auditory processing system applied to object localization. Here, for the first time, we describe an end-to-end object localization system obtained by connecting a state-of-the-art piezoelectric micromachined ultrasonic transducer (pMUT) to a computational graph based on neuromorphic resistive memory (RRAM). In-memory computing architectures using RRAM are a promising solution for reducing power consumption23,24,25,26,27,28,29. Their inherent non-volatility, which means that no active power is required to store or update information, fits perfectly with the asynchronous, event-driven nature of neuromorphic computing, resulting in virtually no power consumption when the system is idle. Piezoelectric micromachined ultrasonic transducers (pMUTs) are inexpensive, miniaturized, silicon-based ultrasonic transducers capable of acting as transmitters and receivers30,31,32,33,34. To process the signals received by the built-in sensors, we drew inspiration from barn owl neuroanatomy35,36,37. The barn owl Tyto alba is known for its remarkable night hunting abilities thanks to a very efficient auditory localization system. To compute the location of prey, the barn owl's localization system encodes the time of flight (ToF) at which the sound waves emitted by the prey reach each of the owl's ears (or sound receptors). Given the distance between the ears, the difference between the two ToF measurements (the interaural time difference, ITD) makes it possible to analytically compute the azimuth position of the target. Although biological systems are poorly suited to solving algebraic equations, they can solve localization problems very effectively. The barn owl's nervous system uses a set of coincidence detector (CD) neurons35 (i.e., neurons capable of detecting temporal correlations between spikes arriving at converging excitatory terminals)38,39 organized into a computational graph to solve the localization problem.
Previous research has shown that complementary metal-oxide-semiconductor (CMOS) and RRAM-based neuromorphic hardware inspired by the inferior colliculus ("auditory cortex") of the barn owl is an efficient way to compute position from the ITD13,40,41,42,43,44,45,46. However, the potential of complete neuromorphic systems that link auditory cues to neuromorphic computational graphs has yet to be demonstrated. The main problem is the inherent variability of analog CMOS circuits, which affects the accuracy of coincidence detection. Recently, alternative digital implementations of the ITD estimation have also been demonstrated47. In this paper, we propose to use the ability of RRAM to change its conductance value in a non-volatile manner to counteract the variability of the analog circuits. We implemented an experimental system consisting of one pMUT transmitting membrane operating at a frequency of 111.9 kHz, two pMUT receiving membranes (sensors) emulating the barn owl's ears, and one RRAM-based neuromorphic computational graph for the ITD computation. We experimentally characterized the pMUT detection system and the RRAM-based ITD computational graph to test our localization system and evaluate its angular resolution.
We compare our method with a digital implementation on a microcontroller performing the same localization task using conventional beamforming or neuromorphic methods, as well as with the field-programmable gate array (FPGA) for ITD estimation proposed in ref. 47. This comparison highlights the competitive power efficiency of the proposed RRAM-based analog neuromorphic system.
One of the most striking examples of an accurate and efficient object localization system is found in the barn owl35,37,48. At dusk and dawn, the barn owl (Tyto alba) primarily relies on passive listening, actively seeking out small prey such as voles or mice. These auditory experts can localize auditory signals from prey with astonishing accuracy (about 2°)35, as shown in Fig. 1a. Barn owls infer the location of a sound source in the azimuth (horizontal) plane from the difference in time of flight from the sound source to the two ears, the interaural time difference (ITD). A computational mechanism for the ITD was proposed by Jeffress49,50; it relies on neural geometry and requires two key components: the axon, a neuron's nerve fiber acting as a delay line, and an array of coincidence detector neurons organized into a computational graph, as shown in Fig. 1b. The sound reaches the ears with an azimuth-dependent time difference (ITD) and is then converted into a spike train in each ear. The axons from the left and right ears act as delay lines and converge onto CD neurons. Ideally, only one neuron in the array of coincidence detectors receives its inputs simultaneously (where the axonal delay exactly cancels the ITD) and fires maximally (neighboring cells also fire, but at a lower rate). The activation of a specific neuron thus encodes the position of the target in space without any further conversion of the ITD into an angle. This concept is summarized in Fig. 1c: for example, if the sound comes from the right side, the input signal from the right ear travels a longer axonal path than that from the left ear, exactly compensating the ITD, e.g., when neuron 2 is matched. In other words, each CD responds to a specific ITD (also known as the best delay) thanks to the axonal delays. The brain thereby converts temporal information into spatial information. Anatomical evidence for this mechanism has been found37,51. Phase-locked neurons of the nucleus magnocellularis encode temporal information about incoming sounds: as their name implies, they fire at specific phases of the signal. The coincidence detector neurons of the Jeffress model are found in the nucleus laminaris. They receive information from magnocellular neurons whose axons act as delay lines. The amount of delay provided by a delay line can be explained by the axon length, as well as by varying myelination patterns that change the conduction velocity. Inspired by the auditory system of the barn owl, we have developed a biomimetic system for localizing objects. The two ears are represented by two pMUT receivers, the sound source by a pMUT transmitter located between them (Fig. 1a), and the computational graph is formed by a grid of RRAM-based CD circuits (Fig. 1b, green) that play the role of the CD neurons, with inputs delayed by delay-line circuits (blue) that act like the axons in the biological counterpart. The proposed sensory system differs in operating frequency from that of the owl, whose auditory system operates in the 1-8 kHz range, whereas the pMUT sensors used in this work operate at about 117 kHz. The choice of an ultrasonic transducer follows technical and optimization criteria. First, restricting the receive bandwidth to a single frequency ideally improves measurement accuracy and simplifies the post-processing step. In addition, operating in the ultrasonic range has the advantage that the emitted pulses are inaudible and therefore do not disturb people, since the human auditory range is roughly 20 Hz-20 kHz.
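For reference, the relation between ITD and azimuth that underlies this scheme can be written, in the standard far-field (plane-wave) approximation for two receivers separated by a distance d with a sound speed c, as

\mathrm{ITD} = \frac{d\,\sin\theta}{c}, \qquad \theta = \arcsin\!\left(\frac{c\,\mathrm{ITD}}{d}\right),

so that, with d of about 10 cm and c of about 343 m/s, the maximum ITD is roughly 300 µs. The exact conversion used for the pulse-echo geometry of Fig. 2b is given in the Methods.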
a The barn owl receives sound waves from a target, in this case moving prey. The time of flight (ToF) of the sound wave is different for each ear (unless the prey is directly in front of the owl). The dotted lines show the paths the sound waves take to reach the barn owl's ears. The prey can be accurately localized in the horizontal plane based on the length difference between the two acoustic paths and the corresponding interaural time difference (ITD) (left panel inspired by ref. 74, copyright 2002, Society for Neuroscience). In our system, the pMUT transmitter (dark blue) generates sound waves that bounce off the target. The reflected ultrasound waves are received by two pMUT receivers (light green) and processed by the neuromorphic processor (right). b The ITD (Jeffress) computational model, describing how sounds entering the barn owl's ears are first encoded as phase-locked spikes in the nucleus magnocellularis (NM) and then processed by a geometrically arranged grid of coincidence detector neurons in the nucleus laminaris (NL) (left). Illustration of the neuromorphic ITD computational graph combining delay lines and coincidence detector neurons, with which the owl's biosensing system can be modeled using RRAM-based neuromorphic circuits (right). c Schematic of the core Jeffress mechanism: owing to the difference in ToF, the two ears receive the sound stimulus at different times and project axons from both sides onto the detectors. The axons contact a series of coincidence detector (CD) neurons, each of which responds selectively to strongly time-correlated inputs. As a result, only the CD whose inputs arrive with the smallest time difference (where the ITD is exactly compensated) is maximally excited. That CD then encodes the angular position of the target.
Piezoelectric micromachined ultrasonic transducers are scalable ultrasonic transducers that can be integrated with advanced CMOS technology31,32,33,52 and have lower drive voltage and power consumption than traditional bulk transducers53. In our work, the membrane diameter is 880 µm and the resonant frequencies are distributed in the 110-117 kHz range (Fig. 2a, see Methods for details). In a batch of ten test devices, the average quality factor was about 50 (ref. 31). The technology has reached industrial maturity and is not bioinspired per se. Combining information from different pMUT membranes is a well-known technique, and angular information can be obtained from pMUTs using, for example, beamforming techniques31,54. However, the signal processing required to extract the angular information is not suitable for low-power measurements. The proposed system instead couples the pMUTs, through a neuromorphic pre-processing circuit, to an RRAM-based neuromorphic computational graph inspired by the Jeffress model (Fig. 2c), providing an alternative energy-efficient and resource-constrained hardware solution. We performed an experiment in which two pMUT sensors were placed approximately 10 cm apart to exploit the different ToFs of the sound received by the two receiving membranes. A pMUT acting as a transmitter sits between the receivers. The target was a PVC plate 12 cm wide, located at a distance D in front of the pMUT device (Fig. 2b). The receivers record the sound reflected from the object and respond maximally as the sound wave passes. The experiment was repeated while changing the position of the object, defined by the distance D and the angle θ. Inspired by ref. 55, we propose a neuromorphic pre-processing of the raw pMUT signals to convert the reflected waves into spikes that feed the neuromorphic computational graph. The ToF corresponding to the peak amplitude is extracted from each of the two channels and encoded as the precise timing of individual spikes. Figure 2c shows the circuitry required to interface the pMUT sensors with the RRAM-based computational graph: for each of the two pMUT receivers, the raw signal is band-pass filtered, rectified, and then passed to a leaky integrate-and-fire (LIF) neuron that generates an output event (spike) when its dynamic threshold is exceeded (Fig. 2d); the output spike time encodes the detected time of flight. The LIF threshold is calibrated against the pMUT response, thereby reducing device-to-device pMUT variability. With this approach, instead of storing the entire sound wave in memory and processing it later, we simply generate a spike corresponding to the ToF of the sound wave, which forms the input to the resistive-memory computational graph. The spikes are sent directly to the delay lines and coincidence detection modules of the neuromorphic computational graph in parallel. Because they are applied to the gates of transistors, no additional amplification circuitry is required (see Supplementary Fig. 4 for details). To evaluate the angular localization accuracy provided by the pMUTs and the proposed signal processing method, we measured the ITD (that is, the time difference between the spike events generated by the two receivers) as the distance and angle of the object varied. The measured ITDs were then converted into angles (see Methods) and plotted against the object position: the uncertainty in the measured ITD increases with the distance and angle to the object (Fig. 2e,f). The main problem is the peak-to-noise ratio (PNR) of the pMUT response.
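To make the pre-processing chain concrete, the sketch below converts a simulated echo into a single ToF-encoding spike through band-pass filtering, rectification, and a software leaky integrate-and-fire stage. It is a minimal behavioral model with illustrative parameters (filter band, leak time constant, threshold), not the fabricated circuit or its calibrated values.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250e3          # sampling rate (Hz), as used for the digital baseline
F0 = 111.9e3        # pMUT resonance (Hz)

def tof_spike(raw, fs=FS, f0=F0, tau=50e-6, threshold=0.5):
    """Return the time (s) of the single output spike encoding the ToF.

    raw: 1-D array with the received pMUT waveform.
    tau: leak time constant of the software LIF stage (illustrative).
    threshold: firing threshold relative to the normalized envelope.
    """
    # 1) Band-pass around the transducer resonance to reject out-of-band noise.
    sos = butter(4, [f0 - 5e3, f0 + 5e3], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, raw)

    # 2) Full-wave rectification and normalization.
    rectified = np.abs(filtered)
    rectified /= rectified.max() + 1e-12

    # 3) Leaky integration and threshold crossing (software LIF).
    dt, v = 1.0 / fs, 0.0
    for n, x in enumerate(rectified):
        v += dt * (x - v) / tau          # leaky integrator driven by the input
        if v > threshold:
            return n * dt                # spike time encodes the ToF
    return None                          # no echo detected

# Example: synthetic echo arriving ~2.9 ms after emission (~50 cm round trip).
t = np.arange(0, 6e-3, 1.0 / FS)
echo = np.sin(2 * np.pi * F0 * t) * np.exp(-((t - 2.9e-3) / 0.2e-3) ** 2)
noisy = echo + 0.05 * np.random.randn(t.size)
print(tof_spike(noisy))
```

In the fabricated system this role is played by analog circuitry and the spike is delivered directly to the transistor gates of the computational graph; the code above only illustrates the signal flow.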
The farther the object, the weaker the acoustic signal, which reduces the PNR (Fig. 2f, green line). A decrease in PNR increases the uncertainty of the ITD estimate and therefore degrades the localization accuracy (Fig. 2f, blue line). For an object at a distance of 50 cm from the transmitter, the angular accuracy of the system is approximately 10°. This limitation, imposed by the characteristics of the sensor, can be improved. For example, the pressure emitted by the transmitter can be increased by raising the voltage driving the pMUT membrane. Another solution to amplify the transmitted signal is to connect multiple transmitters56. These solutions would increase the detection range at the expense of higher energy costs. Additional improvements can be made on the receiving side. The receiver noise floor of the pMUT can be significantly reduced by improving the connection between the pMUT and the first-stage amplifier, which is currently made with wire bonds and RJ45 cables.
a Image of a pMUT die with six 880 µm membranes integrated at a 1.5 mm pitch. b Diagram of the measurement setup. The target is located at azimuth position θ and at distance D. The pMUT transmitter generates a 117.6 kHz signal that bounces off the target and reaches the two pMUT receivers with different times of flight (ToF). This difference, defined as the interaural time difference (ITD), encodes the position of the object and can be estimated from the peak responses of the two receiving sensors. c Schematic of the pre-processing steps that convert the raw pMUT signal into spike trains (i.e., the input to the neuromorphic computational graph). The pMUT sensors and the neuromorphic computational graph have been fabricated and tested, while the neuromorphic pre-processing is based on software simulation. d Response of the pMUT membrane upon reception of a signal and its transformation into the spike domain. e Experimental angular localization accuracy as a function of the object angle (Θ) and distance (D) to the target object. The ITD extraction method imposes a minimum angular resolution of approximately 4°. f Angular accuracy (blue line) and corresponding peak-to-noise ratio (green line) versus object distance for Θ = 0.
Resistive memory stores information in a non-volatile conductance state. The basic principle is that a modification of the material at the atomic level causes a change in its electrical conductivity57. Here we use an oxide-based resistive memory consisting of a 5 nm layer of hafnium dioxide sandwiched between titanium and titanium nitride top and bottom electrodes. The conductance of RRAM devices can be changed by applying a current/voltage waveform that creates or breaks conductive filaments of oxygen vacancies between the electrodes. We co-integrated such devices58 into a standard 130 nm CMOS process to create a fabricated reconfigurable neuromorphic circuit implementing a coincidence detector and a delay-line circuit (Fig. 3a). The non-volatile and analog nature of the devices, combined with the event-driven nature of the neuromorphic circuit, minimizes power consumption. The circuit has an instant on/off capability: it operates immediately after being powered on, allowing the supply to be switched off completely when the circuit is idle. The main building block of the proposed scheme is shown in Fig. 3b. It consists of N parallel one-transistor/one-resistor (1T1R) structures that encode synaptic weights, from which weighted currents are drawn, injected into the common synapse of a differential pair integrator (DPI)59, and finally integrated by a leaky integrate-and-fire (LIF) neuron60 (see Methods for details). The input spikes are applied to the gates of the 1T1R structures as sequences of voltage pulses with durations on the order of hundreds of nanoseconds. Resistive memory can be placed in a high conductive state (HCS) by applying a positive voltage to Vtop with Vbottom grounded, and reset to a low conductive state (LCS) by applying a positive voltage to Vbottom with Vtop grounded. The average HCS conductance can be controlled by limiting the SET programming current (compliance current, ICC) through the gate-source voltage of the series transistor (Fig. 3c). The functions of the RRAM in the circuit are twofold: the devices route and weight the input pulses.
a Scanning electron microscope (SEM) image of the HfO2 1T1R RRAM device (blue) integrated in 130 nm CMOS technology with its selector transistor (650 nm wide, green). b Basic building block of the proposed neuromorphic scheme. The input voltage pulses (spikes) Vin0 and Vin1 draw a current Iweight proportional to the conductance states G0 and G1 of the 1T1R structures. This current is injected into the DPI synapse and excites the LIF neuron. The RRAMs G0 and G1 are set to the HCS and LCS, respectively. c Cumulative conductance distribution for a group of 16K RRAM devices as a function of the compliance current ICC, which effectively controls the conductance level. d Measurement of the circuit in (b) showing that G1 (in the LCS) effectively blocks the input from Vin1 (green), and indeed the output neuron's membrane voltage responds only to the blue input from Vin0. The RRAM effectively determines the connections in the circuit. e Measurement of the circuit in (b) showing the effect of the conductance value G0 on the membrane voltage Vmem after applying a voltage pulse Vin0. The higher the conductance, the stronger the response: the RRAM device thus implements the weighting of the input/output connection. The measurements were made on the fabricated circuit and demonstrate the dual function of the RRAM: routing and weighting of the input pulses.
First, since there are two basic conduction states (HCS and LCS), RRAMs can block or pass input pulses when they are in the LCS or HCS, respectively. As a result, the RRAM effectively determines the connections in the circuit. This is the basis of the architecture's reconfigurability. To demonstrate this, we describe measurements of the fabricated implementation of the circuit block in Fig. 3b. The RRAM corresponding to G0 is programmed into the HCS and the second RRAM, G1, into the LCS. Input pulses are applied to both Vin0 and Vin1. The effect of the two input pulse trains was analyzed at the output neuron by recording the neuron's membrane voltage and its output signal with an oscilloscope. The experiment confirmed that only pulses routed through the HCS device (G0) stimulate the neuron's membrane voltage. This is demonstrated in Fig. 3d, where the blue pulse train causes the membrane voltage to build up on the membrane capacitor, while the green pulse train leaves the membrane voltage unchanged.
The second important function of the RRAM is the implementation of the connection weights. Using the analog conductance tunability of RRAM, input/output connections can be weighted accordingly. In a second experiment, the G0 device was programmed to different HCS levels and an input pulse was applied to Vin0. The input pulse draws a current (Iweight) from the device that is proportional to its conductance and to the corresponding potential drop Vtop - Vbot. This weighted current is then injected into the DPI synapse and the LIF output neuron. The membrane voltage of the output neuron was recorded with an oscilloscope and is displayed in Fig. 3e. The peak of the neuron's membrane voltage in response to a single input pulse is proportional to the conductance of the resistive memory, demonstrating that the RRAM can be used as a programmable synaptic weight element. These two preliminary tests show that the proposed RRAM-based neuromorphic platform is able to implement the basic elements of the Jeffress mechanism, namely the delay line and the coincidence detector circuit. The circuit platform is built by stacking successive blocks, such as the one in Fig. 3b, side by side and connecting their gates to a common input line. We designed, fabricated, and tested a neuromorphic platform consisting of two output neurons receiving two inputs (Fig. 4a). The circuit diagram is shown in Fig. 4b. The upper 2 × 2 RRAM matrix routes the input pulses to the two output neurons, while the lower 2 × 2 matrix implements recurrent connections between the two neurons (N0, N1). We demonstrate that this platform can be used to implement a delay-line configuration and two different coincidence detector functions, as shown by the experimental measurements in Fig. 4c-e.
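The routing and weighting behavior demonstrated in Fig. 3d,e can be captured with a very simple behavioral model in which each input spike injects a charge packet proportional to the conductance of its 1T1R device into a leaky membrane. The sketch below uses arbitrary normalized units and is an abstraction of the measurements, not a transistor-level description of the fabricated DPI/LIF circuit.

```python
import numpy as np

DT = 1e-6          # simulation time step (s)
TAU_MEM = 20e-6    # membrane leak time constant (illustrative)

def membrane_trace(spike_times, conductances, t_end=200e-6):
    """Leaky membrane driven by spikes weighted by RRAM conductances (a.u.).

    spike_times : one list of spike times (s) per input line
    conductances: conductance of the 1T1R device on each line (a.u.);
                  ~0 models the LCS (blocked input), >0 models the HCS.
    """
    n_steps = int(t_end / DT)
    v = np.zeros(n_steps)
    for k in range(1, n_steps):
        v[k] = v[k - 1] * (1.0 - DT / TAU_MEM)           # membrane leak
        t = k * DT
        for times, g in zip(spike_times, conductances):
            if any(abs(t - ts) < DT / 2 for ts in times):
                v[k] += g                                # charge packet ~ G
    return v

# Routing (cf. Fig. 3d): G0 in the HCS passes its spikes, G1 in the LCS
# blocks them, so the membrane responds only to the first input line.
v_route = membrane_trace([[20e-6, 40e-6], [30e-6, 50e-6]], [1.0, 0.0])
print(v_route.max())

# Weighting (cf. Fig. 3e): the same input with increasing conductance
# produces a proportionally larger membrane deflection.
for g in (0.5, 1.0, 2.0):
    print(g, membrane_trace([[20e-6]], [g]).max())
```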
a Circuit schematic formed by two output neurons, N0 and N1, receiving two inputs, 0 and 1. The top four devices of the array define the synaptic connections from inputs to outputs, and the bottom four cells define the recurrent connections between the neurons. The colored RRAMs represent the devices configured in the HCS: devices in the HCS enable connections and implement the weights, while devices in the LCS block the input pulses and disable the corresponding connections. b Diagram of the circuit in (a), with the eight RRAM modules highlighted in blue. c Delay line formed by simply exploiting the dynamics of the DPI synapse and the LIF neuron. The green RRAM is set to a conductance high enough to induce an output spike delayed by Δt with respect to the input. d Schematic illustration of direction-insensitive CD detection of temporally correlated signals. Output neuron 1 (N1) fires when inputs 0 and 1 arrive within a short delay of each other. e Direction-sensitive CD circuit, which detects when input 1 arrives close to, and after, input 0. The output of the circuit is given by neuron 1 (N1).
The delay line (Fig. 4c) simply uses the dynamic behavior of the DPI synapse and the LIF neuron to reproduce the input spike from Vin1 at Vout1 with a delay Tdel. Only the RRAM G3, connecting Vin1 and Vout1, is programmed in the HCS; the remaining RRAMs are programmed in the LCS. The G3 device was programmed to 92.6 µS to ensure that each input pulse increases the membrane voltage of the output neuron enough to reach the threshold and generate a delayed output pulse. The delay Tdel is determined by the synaptic and neural time constants. Coincidence detectors detect the occurrence of temporally correlated but spatially distributed input signals. The direction-insensitive CD relies on individual inputs converging onto a common output neuron (Fig. 4d). The two RRAMs connecting Vin0 and Vin1 to Vout1, G2 and G4 respectively, are programmed to a high conductance. The simultaneous arrival of spikes at Vin0 and Vin1 raises the membrane voltage of neuron N1 above the threshold required to generate the output spike. If the two inputs arrive too far apart in time, the charge accumulated on the membrane by the first input has time to decay, preventing the membrane potential of N1 from reaching the threshold. G1 and G2 are programmed to approximately 65 µS, which ensures that a single input spike does not raise the membrane voltage enough to trigger an output spike. Coincidence detection between events distributed in space and time is a fundamental operation used in a wide range of sensing tasks, such as optical-flow-based obstacle avoidance and sound source localization. Thus, direction-sensitive and direction-insensitive CDs are fundamental building blocks for constructing visual and auditory localization systems. As shown by the characterization of the time constants (see Supplementary Fig. 2), the proposed circuit implements a suitable range of time scales spanning four orders of magnitude, so it can simultaneously meet the requirements of visual and auditory systems. The direction-sensitive CD is a circuit that is sensitive to the spatial order of arrival of the pulses: from right to left or vice versa. It is a fundamental building block of the elementary motion detection network of the Drosophila visual system, used to compute motion directions and detect collisions62. To obtain a direction-sensitive CD, the two inputs must be routed to two different neurons (N0, N1) and a directed connection must be established between them (Fig. 4e). When the first input arrives, N0 responds by raising its membrane voltage above the threshold and emitting a spike. This output event in turn stimulates N1 through the directed connection highlighted in green. If an input event on Vin1 arrives and excites N1 while its membrane voltage is still elevated, N1 generates an output event indicating that a coincidence between the two inputs has been found. The directed connection allows N1 to emit an output only if input 1 arrives after input 0. G0, G3, and G7 are programmed to 73.5 µS, 67.3 µS, and 40.2 µS, respectively, ensuring that a single spike on input Vin0 causes a delayed output spike, while the membrane potential of N1 reaches the threshold only when the two input spikes arrive close together in time.
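The direction-sensitive CD behavior described above can likewise be sketched behaviorally: input 0 produces a delayed spike from N0 (delay-line-like behavior), N0's spike primes N1 through the directed connection, and N1 fires only if input 1 arrives while that priming has not yet decayed. The following is a behavioral sketch in normalized units, with arbitrary weights, delay, and time constant rather than the programmed conductances of the fabricated circuit.

```python
DT, TAU = 1e-6, 25e-6       # time step and N1 membrane time constant (illustrative)
V_TH = 1.0                  # firing threshold (normalized)
N0_DELAY = 20e-6            # delay between input 0 and the N0 output spike

def direction_sensitive_cd(t_in0, t_in1, w_rec=0.7, w_in1=0.7, t_end=300e-6):
    """Return the output spike times of N1 (ordered coincidence) in seconds.

    N0 converts input 0 into a delayed spike; N1 sums that delayed spike and
    input 1, each of which is subthreshold on its own, so N1 fires only when
    they arrive close together in time (i.e., input 1 shortly after input 0).
    """
    t_n0_spike = t_in0 + N0_DELAY
    v1, out = 0.0, []
    for k in range(int(t_end / DT)):
        t = k * DT
        v1 *= 1.0 - DT / TAU                  # leak of N1's membrane
        if abs(t - t_n0_spike) < DT / 2:
            v1 += w_rec                       # directed connection N0 -> N1
        if abs(t - t_in1) < DT / 2:
            v1 += w_in1                       # direct input 1 -> N1
        if v1 >= V_TH:
            out.append(t)                     # coincidence in the right order
            v1 = 0.0
    return out

print(direction_sensitive_cd(t_in0=50e-6, t_in1=75e-6))  # input 1 after input 0: spike
print(direction_sensitive_cd(t_in0=75e-6, t_in1=50e-6))  # reversed order: silent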
Variability is a source of imperfection in analog neuromorphic systems63,64,65. It leads to heterogeneous behavior of neurons and synapses. Examples of such imperfections include variability (standard deviation over the mean) of around 30% in the input gain, time constant, and refractory period, among others (see Methods). The problem is even more pronounced when multiple neural circuits are connected together, as in a direction-sensitive CD consisting of two neurons. To work properly, the gain and decay time constants of the two neurons should be as similar as possible. For example, a large difference in input gain can cause one neuron to overreact to an input pulse while the other barely responds. Figure 5a shows that randomly selected neurons respond differently to the same input pulse. This neural variability is relevant, for example, to the function of direction-sensitive CDs. In the scheme shown in Fig. 5b, c, the input gain of neuron 1 is much higher than that of neuron 0: neuron 0 requires three input pulses (instead of one) to reach the threshold, while neuron 1, as expected, needs only two input events. Implementing biomimetic spike-timing-dependent plasticity (STDP) is one possible way to mitigate the impact of imprecise neural and synaptic circuits on system performance43. Here, we instead propose to use the plastic behavior of the resistive memory to modulate the gain of the neural inputs and thereby reduce the effect of variability in the analog neuromorphic circuits. As shown in Fig. 3e, the conductance level of the RRAM synaptic weight effectively modulates the corresponding neural membrane voltage response. We use an iterative RRAM programming strategy: for a given input, the conductance values of the synaptic weights are reprogrammed until the target circuit behavior is obtained (see Methods).
a Experimental measurements of the response of nine randomly selected individual neurons to the same input pulse. The response varies across the population, affecting the input gain and the time constant. b Experimental measurements of the effect of neuron-to-neuron variability on the direction-sensitive CD. The two direction-sensitive CD output neurons respond differently to the input stimuli owing to neuron-to-neuron variability. Neuron 0 has a lower input gain than neuron 1, so it takes three input pulses (instead of one) to produce an output spike; neuron 1, as expected, reaches the threshold with two input events. If input 1 arrives Δt = 50 µs after neuron 0 fires, the CD remains silent because Δt is greater than the time constant of neuron 1 (about 22 µs). c Δt is reduced to 20 µs, so that input 1 arrives while the excitation of neuron 1 is still high, resulting in the detection of the coincidence of the two input events.
The two elements used in the ITD computational graph are the delay line and the direction-insensitive CD. Both circuits require precise calibration to ensure good object localization performance. The delay line must deliver a precisely delayed version of the input spike (Fig. 6a), and the CD must be activated only when its inputs fall within the target detection range. For the delay line, the synaptic weight of the input connection (G3 in Fig. 4a) was reprogrammed until the target delay was obtained, with a tolerance around the target delay set to stop the procedure: the smaller the tolerance, the harder it is to successfully calibrate the delay line. Figure 6b shows the results of the delay-line calibration process: the proposed scheme can precisely provide all the delays required in the design (from 10 to 300 µs). The maximum number of calibration iterations affects the quality of the calibration: 200 iterations reduce the error to less than 5%. One calibration iteration corresponds to one SET/RESET operation of an RRAM cell. The tuning process is also critical for improving the accuracy with which the CD module detects temporally close events. Ten calibration iterations were needed to achieve a true positive rate (i.e., the rate of events correctly identified as relevant) above 95% (blue line in Fig. 6c). However, the tuning process did not affect the false positive events (that is, the rate of events erroneously identified as relevant). Another strategy observed in biological systems to overcome the limitations of imprecise, fast pathways is redundancy (that is, many copies of the same element are used to perform a given function). Inspired by biology66, we placed several CD circuits in each CD module between the two delay lines to reduce the impact of false positives. As shown in Fig. 6c (green line), placing three CD elements in each CD module reduces the false alarm rate to below 10-2.
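The iterative calibration described above can be summarized as a simple closed-loop procedure. The sketch below is a Python rendering of that loop built around two hypothetical helper functions, program_conductance() and measure_delay(), which stand in for the SET/RESET hardware operation (one calibration iteration) and the oscilloscope measurement; the update rule (lower conductance for a delay that is too short, higher conductance for one that is too long) follows the Methods.

```python
def calibrate_delay_line(target_delay, tolerance, g_init, g_step,
                         program_conductance, measure_delay,
                         max_iterations=200):
    """Iteratively reprogram one RRAM weight until the delay-line output
    falls within `tolerance` of `target_delay`.

    program_conductance(g) and measure_delay() are hypothetical stand-ins
    for the RESET/SET programming operation and the delay measurement.
    Returns (conductance, measured_delay, iterations_used).
    """
    g, delay = g_init, None
    for iteration in range(1, max_iterations + 1):
        program_conductance(g)              # one SET/RESET iteration
        delay = measure_delay()
        error = delay - target_delay
        if abs(error) <= tolerance:         # within tolerance: calibrated
            return g, delay, iteration
        if error > 0:
            g += g_step                     # delay too long -> stronger weight
        else:
            g -= g_step                     # delay too short -> weaker weight
    return g, delay, max_iterations

# Toy usage with a first-order model in which the delay shrinks as the
# conductance grows (purely illustrative, not a device model).
state = {"g": 40e-6}
toy_program = lambda g: state.update(g=g)
toy_measure = lambda: 5e-3 / state["g"] * 1e-6
print(calibrate_delay_line(100e-6, 2e-6, 40e-6, 1e-6,
                           toy_program, toy_measure))
```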
a Effect of neuronal variability on the delay-line circuits. b Delay-line circuits can be scaled to large delays by setting the time constants of the corresponding LIF neurons and DPI synapses to large values. Increasing the number of iterations of the RRAM calibration procedure significantly improves the accuracy of the target delay: 200 iterations reduce the error to less than 5%. One iteration corresponds to one SET/RESET operation on an RRAM cell. c Each CD module of the Jeffress model can be implemented using N parallel CD elements for greater robustness against failures. d More RRAM calibration iterations increase the true positive rate (blue line), while the false positive rate is independent of the number of iterations (green line). Placing more CD elements in parallel avoids the false detection of coincidences by the CD module.
We now evaluate the performance and power consumption of the end-to-end integrated object localization system shown in Fig. 2, using measurements of the acoustic properties of the pMUT sensors and of the CD and delay-line circuits that make up the neuromorphic computational graph implementing the Jeffress model (Fig. 1a). For the neuromorphic computational graph, the larger the number of CD modules, the better the angular resolution, but also the higher the system energy (Fig. 7a). A compromise can be reached by comparing the accuracy of the individual components (pMUT sensors, neurons, and synaptic circuits) with the accuracy of the entire system. The resolution of the delay line is limited by the time constants of the emulated synapses and neurons, which in our scheme exceed 10 µs, corresponding to an angular resolution of 4° (see Methods). More advanced CMOS technology nodes would allow the design of neural and synaptic circuits with lower time constants, resulting in a higher accuracy of the delay-line elements. In our system, however, the accuracy is limited by the pMUT error in estimating the angular position, i.e., 10° (blue horizontal line in Fig. 7a). We fixed the number of CD modules at 40, which corresponds to an angular resolution of about 4°, i.e., the angular accuracy of the computational graph (light blue horizontal line in Fig. 7a). At the system level, this gives a resolution of 4° and an accuracy of 10° for objects located 50 cm in front of the sensor system. This value is comparable to the neuromorphic sound localization systems reported in ref. 67. A comparison of the proposed system with the state of the art can be found in Supplementary Table 1. Adding additional pMUTs, increasing the acoustic signal level, and reducing electronic noise are possible ways to further improve the localization accuracy. The power consumption of the neuromorphic pre-processing circuit is estimated at 9.7 nW55. Given 40 CD modules in the computational graph, SPICE simulations estimated the energy per operation (i.e., per object localization) to be 21.6 nJ. The neuromorphic system is activated only when an input event arrives, i.e., when an acoustic wave reaches any pMUT receiver and exceeds the detection threshold; otherwise it remains inactive. This avoids unnecessary power consumption when there is no input signal. Considering a localization operation frequency of 100 Hz and an activation period of 300 µs per operation (the maximum possible ITD), the power consumption of the neuromorphic computational graph is 61.7 nW. With the neuromorphic pre-processing applied to each pMUT receiver, the power consumption of the entire system reaches 81.6 nW. To understand the energy efficiency of the proposed neuromorphic approach compared to conventional hardware, we compared this number with the power required to perform the same task on a modern low-power microcontroller using either a neuromorphic or a conventional beamforming68 approach. The neuromorphic approach considers an analog-to-digital converter (ADC) stage, followed by a band-pass filter and an envelope extraction stage (Teager-Kaiser operator). Finally, a threshold operation is applied to extract the ToF. We have omitted the computation of the ITD from the ToFs and its conversion into an estimated angular position, since this occurs only once per measurement (see Methods). Assuming a sampling rate of 250 kHz on both channels (pMUT receivers), 18 band-pass filter operations, 3 envelope extraction operations, and 1 threshold operation per sample, the total power consumption is estimated at 245 µW.
This estimate exploits the microcontroller's low-power mode69, which is activated when the algorithm is not executing and reduces the static power consumption to 10.8 µW. The power consumption of the beamforming signal processing solution proposed in ref. 31, with 5 pMUT receivers and 11 beams uniformly distributed in the azimuth plane [-50°, +50°], is 11.71 mW (see the Methods section for details). In addition, we report the power consumption of an FPGA-based time difference encoder (TDE)47, estimated at 1.5 mW, as a replacement for the Jeffress model for object localization. Based on these estimates, the proposed neuromorphic approach reduces the power consumption by five orders of magnitude compared with a microcontroller using classical beamforming techniques for the object localization operation. Adopting the neuromorphic signal processing approach on a classical microcontroller reduces the power consumption by about two orders of magnitude. The effectiveness of the proposed system can be explained by the combination of an asynchronous resistive-memory analog circuit capable of performing in-memory computation and the absence of the analog-to-digital conversion otherwise required to acquire the signals.
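The comparison above can be reproduced directly from the reported figures with a few lines of arithmetic (all numbers are taken from the text; rounding explains small differences in the quoted orders of magnitude):

```python
# Power figures reported in the text (W).
p_neuromorphic_system = 81.6e-9    # RRAM-based graph + pre-processing
p_mcu_neuromorphic    = 244.7e-6   # neuromorphic algorithm on a microcontroller
p_mcu_beamforming     = 11.71e-3   # delay-and-sum beamforming on the same MCU
p_fpga_tde            = 1.5e-3     # FPGA time difference encoder (ref. 47)

for label, p in [("MCU beamforming", p_mcu_beamforming),
                 ("MCU neuromorphic", p_mcu_neuromorphic),
                 ("FPGA TDE", p_fpga_tde)]:
    print(f"{label}: {p / p_neuromorphic_system:.1e}x the proposed system")
# MCU beamforming ~1.4e5x (about five orders of magnitude),
# MCU neuromorphic ~3.0e3x, FPGA TDE ~1.8e4x.

# Neuromorphic algorithm vs. beamforming on the same microcontroller:
print(f"{p_mcu_beamforming / p_mcu_neuromorphic:.0f}x")   # ~48x, about two orders
```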
a Angular resolution (blue) and power consumption (green) of the localization operation as a function of the number of CD modules. The dark blue horizontal line represents the angular accuracy of the pMUT and the light blue horizontal line the angular accuracy of the neuromorphic computational graph. b Power consumption of the proposed system and comparison with the two discussed microcontroller implementations and with the digital FPGA implementation of the time difference encoder (TDE)47.
To minimize the power consumption of the object localization system, we conceived, designed, and implemented an efficient, event-driven, RRAM-based neuromorphic circuit that processes the signal information generated by the built-in sensors to compute the position of the target object in real time. While traditional processing methods continuously sample the detected signals and perform computations to extract useful information, the proposed neuromorphic solution performs computations asynchronously as useful information arrives, improving the system's power efficiency by up to five orders of magnitude. In addition, we highlight the flexibility of RRAM-based neuromorphic circuits. The ability of RRAM to change its conductance in a non-volatile manner (plasticity) compensates for the inherent variability of the ultra-low-power analog DPI synaptic and neural circuits. This makes the RRAM-based circuit versatile and powerful. Our goal is not to extract complex functions or patterns from the signals, but to localize objects in real time. Our system can also efficiently compress the signal and eventually send it to further processing stages to make more complex decisions when needed. In the context of localization applications, our neuromorphic pre-processing step can provide information about the location of objects. This information can be used, for example, for motion detection or gesture recognition. We emphasize the importance of combining ultra-low-power sensors such as pMUTs with ultra-low-power electronics. For this, neuromorphic approaches have been key, as they have led us to develop new circuit implementations of biologically inspired computational methods such as the Jeffress model. In the context of sensor fusion applications, our system can be combined with several different event-based sensors to obtain more accurate information. Although owls are excellent at finding prey in the dark, they also have excellent eyesight and perform a combined auditory and visual search before catching prey70. When a particular auditory neuron fires, the owl receives the information it needs to determine the direction in which to start its visual search, thereby focusing its attention on a small part of the visual scene. A combination of visual sensors (e.g., DVS cameras) and the proposed listening sensor (based on pMUTs) should be explored for the development of future autonomous agents.
The pMUT sensors are located on a PCB, with the two receivers approximately 10 cm apart and the transmitter located between them. In this work, each membrane is a suspended bimorph structure consisting of two 800-nm-thick layers of piezoelectric aluminum nitride (AlN) sandwiched between three 200-nm-thick layers of molybdenum (Mo) and coated with a 200-nm-thick top passivating SiN layer, as described in ref. 71. The inner and outer electrodes are patterned on the bottom and top molybdenum layers, while the middle molybdenum electrode is unpatterned and used as ground, resulting in a membrane with four pairs of electrodes.
This architecture allows a common membrane deformation to be exploited, resulting in improved transmit and receive sensitivity. Such a pMUT typically exhibits an excitation sensitivity of 700 nm/V as an emitter, providing a surface pressure of 270 Pa/V. As a receiver, a single pMUT membrane exhibits a short-circuit sensitivity of 15 nA/Pa, which is directly related to the piezoelectric coefficient of AlN. The technological variability of the stress in the AlN layer leads to a spread in the resonant frequency, which can be compensated by applying a DC bias to the pMUT. The DC tuning sensitivity was measured at 0.5 kHz/V. For acoustic characterization, a microphone is placed in front of the pMUT.
To measure the echo pulses, we placed a rectangular plate with an area of about 50 cm2 in front of the pMUT to reflect the emitted sound waves. Both the distance between the plate and the pMUT and the angle relative to the pMUT plane are controlled using dedicated holders. A Tektronix CPX400DP voltage source biases the three pMUT membranes, tuning the resonant frequency to 111.9 kHz31, while the transmitter is driven by a Tektronix AFG 3102 pulse generator tuned to the resonant frequency (111.9 kHz) with a duty cycle of 0.01. The currents read from the four output ports of each pMUT receiver are converted to voltages using a dedicated differential current-to-voltage architecture, and the resulting signals are digitized by a Spektrum data acquisition system. The detection limit was characterized by acquiring the pMUT signal under different conditions: the reflector was moved to different distances [30, 40, 50, 60, 80, 100] cm and the pMUT support angle was varied ([0°, 20°, 40°]). Figure 2b shows the temporal ITD detection resolution as a function of the corresponding angular position in degrees.
Two different RRAM-based chips are used in this work. The first is an array of 16,384 (16k) devices, arranged as 128 × 128, in a one-transistor/one-resistor (1T1R) configuration. The second chip is the neuromorphic platform shown in Fig. 4a. The RRAM cell consists of a 5-nm-thick HfO2 film embedded in a TiN/HfO2/Ti/TiN stack. The RRAM stack is integrated into the back end of line (BEOL) of a standard 130 nm CMOS process. The RRAM-based neuromorphic circuits pose the design challenge of a fully analog electronic system in which the RRAM devices coexist with traditional CMOS technology. In particular, the conductance state of the RRAM device must be read and used as a functional variable of the system. To this end, a circuit was designed, fabricated, and tested that reads the current flowing through the device when an input pulse arrives and uses this current to weight the response of a differential pair integrator (DPI) synapse. This circuit is shown in Fig. 3a and represents the basic building block of the neuromorphic platform in Fig. 4a. An input pulse activates the gate of the 1T1R device, driving a current through the RRAM proportional to the device conductance G (Iweight = G(Vtop - Vx)). One input of the operational amplifier (op-amp) is held at a constant DC bias voltage, and the op-amp's negative feedback clamps the node Vx to this bias by supplying the corresponding current through M1. The current Iweight drawn from the device is injected into the DPI synapse. A larger current results in a larger depolarization, so the RRAM conductance effectively implements the synaptic weight. The exponential synaptic current is injected into the membrane capacitor of the leaky integrate-and-fire (LIF) neuron, where it is integrated as a voltage. If the membrane threshold voltage (the switching voltage of an inverter) is exceeded, the output stage of the neuron is activated, producing an output spike. This spike is fed back and shunts the neuron's membrane capacitor to ground, discharging it. The circuit is completed by a pulse expander (not shown in Fig. 3a), which shapes the output pulse of the LIF neuron to the target pulse width. Multiplexers are also built into each line, allowing voltages to be applied to the top and bottom electrodes of the RRAM devices.
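In compact form, the signal chain of this building block can be written as follows (a textbook-level description of the DPI synapse and LIF neuron dynamics59,60; the transistor-level behavior of the fabricated circuit includes subthreshold nonlinearities not captured here):

I_{\mathrm{weight}} = G\,(V_{\mathrm{top}} - V_x), \qquad
\tau_{\mathrm{syn}}\,\frac{dI_{\mathrm{syn}}}{dt} = -I_{\mathrm{syn}} + I_{\mathrm{weight}}(t), \qquad
C_{\mathrm{mem}}\,\frac{dV_{\mathrm{mem}}}{dt} = -g_{\mathrm{leak}}\,V_{\mathrm{mem}} + I_{\mathrm{syn}}(t),

with an output spike emitted, and V_mem reset to ground, whenever V_mem exceeds the inverter switching voltage.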
Electrical testing includes the analysis and recording of the dynamic behavior of the analog circuits, as well as the programming and reading of the RRAM devices. Both steps require dedicated instruments, all of which are connected to the test board at the same time. Access to the RRAM devices in the neuromorphic circuit is provided from external instruments through a multiplexer (MUX). The MUX separates the 1T1R cell from the rest of the circuitry to which it belongs, allowing the device to be read and/or programmed. A Keithley 4200 SCS is used together with an Arduino microcontroller to program and read the RRAM devices: the former for accurate pulse generation and current reading, the latter for fast access to individual 1T1R elements in the memory array. The first operation is the forming of the RRAM device. The cells are selected one by one and a positive voltage is applied between the top and bottom electrodes, with the current limited to the order of tens of microamperes by applying the corresponding gate voltage to the selector transistor. The RRAM cell can then be cycled between a low conductive state (LCS) and a high conductive state (HCS) using RESET and SET operations, respectively. The SET operation is carried out by applying a rectangular voltage pulse of 1 µs duration and 2.0-2.5 V peak voltage to the top electrode, and a synchronous pulse of similar shape with a peak voltage of 0.9-1.3 V to the gate of the selector transistor. These values allow the RRAM conductance to be modulated in the 20-150 µS range. For RESET, a 1-µs-wide, 3 V peak pulse is applied to the bottom electrode (bit line) of the cell while the gate voltage is in the range of 2.5-3.0 V. The inputs and outputs of the analog circuits are dynamic signals. For the inputs, we used two HP 8110 pulse generators together with Tektronix AFG3011 signal generators. The input pulses have a width of 1 µs and rise/fall edges of 50 ns. This type of pulse is assumed as a typical spike in analog spike-based circuits. The output signals were recorded using a Teledyne LeCroy 1 GHz oscilloscope. The acquisition speed of the oscilloscope was verified not to be a limiting factor in the analysis and acquisition of the circuit data.
Using the dynamics of analog electronics to emulate the behavior of neurons and synapses is an elegant and efficient way to improve computational efficiency. The drawback of this computational substrate is that it varies from circuit to circuit. We quantified the variability of the neuron and synaptic circuits (Supplementary Fig. 2a,b). Of all the manifestations of variability, those associated with the time constants and the input gain have the greatest impact at the system level. The time constants of the LIF neuron and the DPI synapse are determined by an RC circuit, where the value of R is controlled by a bias voltage applied to the gate of a transistor (Vlk for the neuron and Vtau for the synapse), determining the leakage rate. The input gain is defined as the peak voltage reached by the synaptic and neuronal membrane capacitors when stimulated by an input pulse. The input gain is controlled by another bias transistor that modulates the input current. A Monte Carlo simulation calibrated on ST Microelectronics' 130 nm process was performed to collect input gain and time constant statistics. The results are presented in Supplementary Fig. 2, where the input gain and the time constant are quantified as a function of the bias voltage controlling the leakage rate. The green markers quantify the standard deviation of the time constant from the mean. Both the neuron and the synaptic circuits were able to express a wide range of time constants, in the 10-5 to 10-2 s range, as shown in Supplementary Fig. 2. The input gain variability (Supplementary Fig. 2e,d) of the neuron and synapse circuits was approximately 8% and 3%, respectively. Such imperfections are well documented in the literature: various measurements were performed on arrays of DYNAP chips to assess the mismatch between populations of LIF neurons63, and the synapses of the BrainScaleS mixed-signal chip were measured, their mismatch analyzed, and a calibration procedure proposed to reduce the effect of system-level variability64.
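The population-level effect of such mismatch can be illustrated with a simple Monte Carlo sketch that draws per-neuron input gains and time constants from the measured spreads. The nominal values, distributions, and the assumption that the time constant shows a relative spread similar to the input gain are all illustrative, not the extracted process parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000                   # number of sampled neuron instances
TAU_NOM = 20e-6            # nominal membrane time constant (illustrative)
GAIN_NOM = 1.0             # nominal input gain (normalized)

# ~8% input-gain spread for the neuron circuit (measured); a similar
# relative spread is assumed here for the time constant for illustration.
gain = rng.normal(GAIN_NOM, 0.08 * GAIN_NOM, N)
tau = rng.normal(TAU_NOM, 0.08 * TAU_NOM, N)

# Membrane voltage remaining a time dt after a single unit input pulse
# (initial jump equal to the gain, exponential decay with the time constant).
dt = 10e-6
v_residual = gain * np.exp(-dt / tau)

print(f"gain spread      : {gain.std() / gain.mean():.1%}")
print(f"residual voltage : {v_residual.mean():.3f} +/- {v_residual.std():.3f} (a.u.)")
# Instances with a small gain or time constant may fail to reach threshold
# on inputs that nominally identical neurons would detect, which is the
# mismatch the RRAM-based weight calibration compensates for.
```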
The function of the RRAM in the neuromorphic circuits is twofold: defining the architecture (routing inputs to outputs) and implementing the synaptic weights. The latter property can be exploited to address the variability of the analog neuromorphic circuits. We developed a simple calibration procedure that consists of reprogramming the RRAM device until the circuit under analysis meets certain requirements. For a given input, the output is monitored and the RRAM is reprogrammed until the target behavior is achieved. A waiting time of 5 s was introduced between programming operations to cope with RRAM relaxation, which results in transient conductance fluctuations (Supplementary Information). The synaptic weights are adjusted, or calibrated, according to the requirements of the neuromorphic circuit being implemented. The calibration procedure is summarized in Supplementary Algorithms 1 and 2, which focus on the two fundamental features of the neuromorphic platform: the delay line and the direction-insensitive CD. For a delay-line circuit, the target behavior is to provide an output pulse with a delay Δt. If the actual circuit delay is shorter than the target value, the synaptic weight of G3 should be reduced (G3 is RESET and then SET to a lower compliance current Icc). Conversely, if the actual delay is longer than the target value, the conductance of G3 must be increased (G3 is first RESET and then SET to a higher Icc). This process is repeated until the delay generated by the circuit matches the target value, with a tolerance set to stop the calibration. For the direction-insensitive CD, two RRAM devices, G1 and G3, are involved in the calibration process. This circuit has two inputs, Vin0 and Vin1, delayed by dt, and should respond only to delays within the matching range [0, dtCD]. If there is no output spike although the input spikes are close together, both RRAM devices should be strengthened to help the neuron reach the threshold. Conversely, if the circuit responds to a delay that exceeds the target range dtCD, the conductances must be reduced. The process is repeated until the correct behavior is obtained. The compliance current can be modulated by built-in analog circuitry, as shown in refs. 72,73. With this built-in circuitry, such procedures could be performed periodically to calibrate the system or to reuse it for another application.
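A sketch of the corresponding loop for the direction-insensitive CD (in the spirit of Supplementary Algorithm 2) is given below. program_conductances() and cd_responds(dt) are hypothetical stand-ins for the SET/RESET operations on the two devices and for a stimulus/readout run with a spike pair separated by dt; the update rule follows the description above.

```python
def calibrate_cd(dt_cd, g_init, g_step,
                 program_conductances, cd_responds,
                 probe_in=0.5, probe_out=1.5, max_iterations=100):
    """Calibrate the two CD weights so that the circuit fires for input
    pairs separated by less than dt_cd and stays silent beyond it.

    probe_in / probe_out: test separations, as fractions of dt_cd.
    program_conductances(g) and cd_responds(dt) are hypothetical hardware
    wrappers (RESET/SET of both devices, then a stimulus/readout run);
    in hardware a ~5 s wait follows each programming step to let the
    conductance relax.
    """
    g = g_init
    for iteration in range(1, max_iterations + 1):
        program_conductances(g)
        fires_inside = cd_responds(probe_in * dt_cd)
        fires_outside = cd_responds(probe_out * dt_cd)
        if fires_inside and not fires_outside:
            return g, iteration                 # target behaviour reached
        if not fires_inside:
            g += g_step                         # missed coincidence: strengthen
        elif fires_outside:
            g -= g_step                         # spurious response: weaken
    return g, max_iterations
```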
We evaluate the power consumption of our neuromorphic signal processing approach on a standard 32-bit microcontroller68. In this evaluation, we assume the same setup as in this work, with one pMUT transmitter and two pMUT receivers. The method uses a band-pass filter, followed by an envelope extraction step (Teager-Kaiser operator), and finally a thresholding operation applied to the signal to extract the time of flight. The computation of the ITD and its conversion into a detection angle are omitted from the evaluation. We consider a band-pass filter implementation using a 4th-order infinite impulse response filter requiring 18 floating-point operations. The envelope extraction uses three more floating-point operations, and one last operation is used for thresholding, so a total of 22 floating-point operations is required to pre-process each sample. The transmitted signal is a short burst of a 111.9 kHz sine waveform generated every 10 ms, resulting in a localization operating frequency of 100 Hz. We used a sampling rate of 250 kHz to comply with Nyquist and a 6 ms window per measurement to capture a range of 1 meter; note that 6 ms corresponds to the round-trip time of flight of the sound for an object 1 m away. The A/D conversion at 0.5 MSPS contributes a power consumption of 180 µW. The signal pre-processing amounts to 6.60 MIPS (million instructions per second), producing 0.75 mW. However, the microcontroller can switch to a low-power mode69 when the algorithm is not running. This mode has a static power consumption of 10.8 µW and a wake-up time of 113 µs. Given a clock frequency of 84 MHz, the microcontroller completes all operations of the neuromorphic algorithm well within the 10 ms period; the algorithm thus has a duty cycle of 6.3% and exploits the low-power mode. The resulting power dissipation is 244.7 µW. Note that we omit the computation of the ITD from the ToFs and its conversion into a detection angle, thus underestimating the power consumption of the microcontroller; this provides an additional margin for the energy efficiency of the proposed system. As an additional comparison, we evaluate the power consumption of the classical beamforming methods proposed in refs. 31,54 when embedded on the same microcontroller68 at a 1.8 V supply voltage. Five evenly spaced pMUT membranes are used to acquire the data for beamforming. As for the processing itself, the beamforming method used is delay-and-sum. It simply consists of applying to each lane a delay corresponding to the expected difference in arrival time between that lane and the reference lane. If the signals are in phase, their sum after the time shift has a high energy; if they are out of phase, destructive interference limits the energy of the sum. In ref. 31, a sampling rate of 2 MHz is chosen so that the data can be time-shifted by an integer number of samples. A more modest approach is to keep a coarser sampling rate of 250 kHz and use a finite impulse response (FIR) filter to synthesize the fractional delays. We assume that the complexity of the beamforming algorithm is mainly determined by the time shift, since each channel is convolved with a 16-tap FIR filter for each direction. To calculate the number of MIPS required for this operation, we consider a window of 6 ms per measurement to capture a range of 1 meter, 5 channels, 11 beamforming directions (range ±50° in 10° steps), and 75 measurements per second, which pushes the microcontroller to its maximum of 100 MIPS, as per ref.
68, resulting in a power dissipation of 11.26 mW, for a total power dissipation of 11.71 mW after adding the contribution of the on-board ADC.
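For reference, the delay-and-sum beamformer used as the digital baseline can be sketched as follows, with the fractional delays synthesized by 16-tap windowed-sinc FIR filters at the 250 kHz sampling rate described above. The array geometry, steering grid, and test signal are illustrative stand-ins for the setup of ref. 31, not its exact implementation.

```python
import numpy as np

FS = 250e3                  # sampling rate (Hz)
C = 343.0                   # speed of sound (m/s)
PITCH = 1.5e-3              # receiver pitch (m), as on the pMUT die
ANGLES = np.deg2rad(np.arange(-50, 51, 10))   # 11 beams over +/-50 degrees
N_TAPS = 16

def frac_delay_fir(delay_samples, n_taps=N_TAPS):
    """Windowed-sinc FIR approximating a (possibly fractional) sample delay."""
    n = np.arange(n_taps)
    h = np.sinc(n - (n_taps - 1) / 2 - delay_samples)
    return h * np.hamming(n_taps)

def delay_and_sum(channels, angles=ANGLES, pitch=PITCH, fs=FS):
    """Return the beam output energy per steering angle for (n_ch, n_samp) data."""
    n_ch = channels.shape[0]
    positions = (np.arange(n_ch) - (n_ch - 1) / 2) * pitch
    energies = []
    for theta in angles:
        beam = np.zeros(channels.shape[1])
        for ch in range(n_ch):
            # Advance each channel by its expected arrival delay for angle theta.
            delay = -positions[ch] * np.sin(theta) / C * fs
            beam += np.convolve(channels[ch], frac_delay_fir(delay), "same")
        energies.append(np.sum(beam ** 2))
    return np.array(energies)

# Example: a 111.9 kHz tone impinging from ~20 degrees on 5 receivers.
t = np.arange(0, 1e-3, 1 / FS)
theta_true = np.deg2rad(20)
chans = np.stack([np.sin(2 * np.pi * 111.9e3 *
                         (t - (i - 2) * PITCH * np.sin(theta_true) / C))
                  for i in range(5)])
print(np.rad2deg(ANGLES[np.argmax(delay_and_sum(chans))]))
```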
Data supporting the results of this study are available from the corresponding author, F.M., upon reasonable request.
Indiveri, G. & Sandamirskaya, Y. The importance of space and time for signal processing in neuromorphic agents: the challenge of developing low-power, autonomous agents that interact with the environment. IEEE Signal Process. Mag. 36, 16-28 (2019).
Thorpe, S. J. Spike arrival times: a highly efficient coding scheme for neural networks. In Eckmiller, R., Hartmann, G. & Hauske, G. (eds) Parallel Processing in Neural Systems and Computers 91–94 (North-Holland Elsevier, 1990).
Levy, W. B. & Calvert, V. G. Communication consumes 35 times more energy than computation in the human cortex, but both costs are needed to predict synapse number. Proc. Natl Acad. Sci. USA 118, https://doi.org/10.1073/pnas.2008173118 (2021).
Dalgaty, T., Vianello, E., De Salvo, B. & Casas, J. Insect-inspired neuromorphic computing. Curr. Opin. Insect Sci. 30, 59–66 (2018).
Roy, K., Jaiswal, A. & Panda, P. Towards spike-based machine intelligence with neuromorphic computing. Nature 575, 607–617 (2019).
Indiveri, G. & Liu, S.-C. Memory and information processing in neuromorphic systems. Proc. IEEE 103, 1379–1397 (2015).
Akopyan, F. et al. TrueNorth: design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 34, 1537–1557 (2015).
Schemmel, J. et al. Live demonstration: a scaled-down version of the BrainScaleS wafer-scale neuromorphic system. In 2012 IEEE International Symposium on Circuits and Systems (ISCAS) 702–702 (IEEE, 2012).
Moradi, S., Qiao, N., Stefanini, F. & Indiveri, G. A scalable multicore architecture with heterogeneous memory structures for dynamic neuromorphic asynchronous processors (DYNAPs). IEEE Trans. Biomed. Circuits Syst. 12, 106–122 (2018).
Davies, M. et al. Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro 38, 82–99 (2018).
Furber, S. B., Galluppi, F., Temple, S. & Plana, L. A. The SpiNNaker project. Proc. IEEE 102, 652–665 (2014).
Liu, S.-C. & Delbruck, T. Neuromorphic sensory systems. Curr. Opin. Neurobiol. 20, 288–295 (2010).
Schoepe, T. et al. Neuromorphic sensory integration for combined sound source localization and collision avoidance. In 2019 IEEE Biomedical Circuits and Systems Conference (BioCAS) 1–4 (IEEE, 2019).
Risi, N., Aimar, A., Donati, E., Solinas, S. & Indiveri, G. A spike-based neuromorphic architecture of stereo vision. Front. Neurorobot. 14, 93 (2020).
Osswald, M., Ieng, S.-H., Benosman, R. & Indiveri, G. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems. Sci. Rep. 7, 1–11 (2017).
Dalgaty, T. et al. Insect-inspired elementary motion detection embracing resistive memory and spiking neural networks. Biomimetic and Biohybrid Systems, Lecture Notes in Computer Science 10928, 115–128 (2018).
D’Angelo, G. et al. Event-based eccentric motion detection exploiting time difference encoding. Front. Neurosci. 14, 451 (2020).