Researchers at Los Alamos National Laboratory have found that spiking neural networks become unstable after unbroken periods of unsupervised self-training. Moreover, these “artificial brains” seem to restabilize after they’re given the equivalent of a good night’s rest.
That finding comes from Yijing Watkins, a Los Alamos Lab computer scientist, speaking to the lab’s news division.
The research team made the discovery while working to give their neural networks the capacity to learn how to see.
Garrett Kenyon, also a Los Alamos Lab computer scientist, explains that network instabilities arise when developers use spiking neuromorphic processors that are biologically realistic—or when studying the processors to understand biology itself.
Kenyon says most researchers working with machine learning, deep learning and AI “never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”
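As a rough illustration of what Kenyon means by a global operation that regulates a system’s overall gain, consider a simple recurrent update whose weights amplify activity on each step. The sketch below is not the Los Alamos team’s method; it is a minimal, hypothetical example in which a single network-wide rescaling (here, normalizing the activity vector’s L2 norm to a fixed target) keeps the dynamics bounded, while the unregulated version blows up.

```python
import numpy as np

def global_gain_normalize(activity, target_norm=1.0):
    """Hypothetical global operation: rescale the entire activity
    vector so its L2 norm equals a fixed target, bounding the
    system's overall gain in one network-wide step."""
    norm = np.linalg.norm(activity)
    return activity if norm == 0 else activity * (target_norm / norm)

rng = np.random.default_rng(0)
n = 50
# Random recurrent weights scaled so the spectral radius is ~1.5,
# i.e. each step amplifies activity on average.
w = rng.normal(scale=1.5 / np.sqrt(n), size=(n, n))
x0 = rng.normal(size=n)

unregulated = x0.copy()
regulated = x0.copy()
for _ in range(50):
    unregulated = w @ unregulated                    # grows without bound
    regulated = global_gain_normalize(w @ regulated)  # stays bounded

print(f"unregulated norm: {np.linalg.norm(unregulated):.2e}")
print(f"regulated norm:   {np.linalg.norm(regulated):.2f}")  # prints 1.00
```

A spiking neuromorphic processor has no convenient place to apply such a whole-network rescaling between events, which is one way to read Kenyon’s point about why instability shows up there but not in conventional deep-learning systems.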
The team plans to try their algorithm on a neuromorphic chip, testing if periods of rest allow the chip to steadily process information sent to it from a silicon retina in real time.
“If the findings confirm the need for sleep in artificial brains,” the lab’s news team offers, “we can probably expect the same to be true of androids and other intelligent machines that may come about in the future.”