One of the chief questions in artificial intelligence today is this: what is the appropriate degree of biological plausibility for artificial neural networks (ANNs)? (Put simply, how closely should these models resemble the human brain?)
How do we avoid over- or under-engineering ANNs, the complex networks that power language modeling, object recognition, and a variety of other AI tasks?
Here, we explore a few key components of the issue.
What makes a model biologically plausible?
ANNs with biologically plausible characteristics often incorporate spiking neurons and biologically constrained connection weights, and some such networks can even reproduce human behavioral data (Voelker, 2015). In other models, the basic units differ from their biological counterparts at a functional level but are still designed, in principle, to resemble neurons and synapses.
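To make the idea of a spiking neuron concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) unit, one of the simplest biologically inspired neuron models. All parameter values (membrane time constant, threshold, reset) are illustrative assumptions, not taken from any model cited above.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameters are illustrative, not drawn from any cited model.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return spike times for a sequence of input-current samples."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_thresh:            # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = v_reset              # reset after spiking
    return spike_times

# A sustained supra-threshold input produces a regular spike train.
spikes = simulate_lif([1.5] * 200)
```

Unlike the continuous activations in standard ANNs, this unit communicates only through discrete spike events, which is the core property that makes spiking networks a closer analogue of biological neurons.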
To what extent should a model be biologically plausible?
One thing is certain: there is no one-size-fits-all approach. An ANN that shares few qualities with the human brain may be out of touch with the task at hand. At the other extreme, there may be little benefit in spending effort to emulate the human brain. As Kriegeskorte (2015) states, ‘one criticism of using complex neural networks to model brain information processing is that it replaces one impenetrably complex network with another.’ Going further, Kriegeskorte also points out that researchers are divided over whether biological plausibility is even necessary in the first place.
How do we determine the appropriate level of biological plausibility?
The best place to start is by looking at the challenges or problems the model must tackle. Simple tasks like detecting basic shapes and forms (e.g. a 50px × 50px black square on a white background) may not require accurate modeling of the human visual system to meet computational needs. If, rather, the goal is for an AI system not only to recognize but to understand input (e.g. complex sounds like speech), the human brain serves as the most powerful example of processing the task efficiently.
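To illustrate the simple end of that spectrum: detecting a solid black square on a white background needs no brain-inspired machinery at all; plain pixel thresholding suffices. The function below is a hypothetical sketch (the name, the grayscale representation, and the threshold are assumptions for illustration), not a method from any cited work.

```python
# Hypothetical sketch: locating a solid near-black square in a
# grayscale image, represented as a 2D list of values in 0-255.

def find_black_square(image, size, threshold=10):
    """Return (row, col) of the top-left corner of the first
    size x size block of near-black pixels, or None if absent."""
    rows, cols = len(image), len(image[0])
    for r in range(rows - size + 1):
        for c in range(cols - size + 1):
            if all(image[r + dr][c + dc] <= threshold
                   for dr in range(size)
                   for dc in range(size)):
                return (r, c)
    return None

# Usage: a 6x6 white image with a 2x2 black square at row 2, col 3.
img = [[255] * 6 for _ in range(6)]
for dr in range(2):
    for dc in range(2):
        img[2 + dr][3 + dc] = 0
print(find_black_square(img, 2))  # (2, 3)
```

A few lines of brute-force scanning solve this task exactly, which is precisely why such problems do not justify the cost of modeling the visual cortex; understanding speech, by contrast, has no comparably trivial solution.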
If biological plausibility is the desired route, the model-building process should be viewed in the same way that human learning occurs. As Lake and colleagues (2016) eloquently state, ‘just as scientists seek to explain nature, not simply predict it, we see human intelligence as a model-building activity.’ This model-building approach will not only produce more efficient models but also, in turn, drive more interdisciplinary research into demystifying brain function.
Regardless of where you stand on the place of biological plausibility in AI systems, the combined efforts of producing more efficient ANNs and driving brain science research will be critical to the advancement of artificial intelligence.
At DimensionalMechanics we value the power of incorporating biological plausibility into AI at both the lower, neural level and higher, cognitive level. Incorporating functionality of the human brain from multiple levels allows for more human-like capabilities to meet the world’s growing technological demands.
References
Kriegeskorte, N. (2015). Deep neural networks: A new framework for modeling biological vision and brain information processing. Annual Review of Vision Science, 1, 417-446.
Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2016). Building machines that learn and think like people. arXiv preprint arXiv:1604.00289.
Voelker, A. R. (2015). A biologically plausible sum-product network for language modeling.
About the author
Dominique Simmons heads cognitive research at DimensionalMechanics, leading studies to inform the development of our AI platform, NeoPulse.