Runchun Mark Wang

The MARCS Institute, University of Western Sydney
Sydney, Australia

Speaker of Workshop 3

Will talk about: Neuromorphic Engineering: New computational paradigms inspired by the brain

Bio sketch:

Runchun Mark Wang is a postdoctoral fellow at The MARCS Institute, where he works with Professor André van Schaik in the field of Neuromorphic Engineering. His current project, Hardware Acceleration for Neural Systems, aims to build an electronic system, comprising both an FPGA implementation and mixed-signal analogue VLSI, capable of simulating neural networks at a scale approaching that of the human brain. Mark will release the accompanying software as open source so that other researchers can also use the system. His PhD topic was “Neuromorphic Implementations of Polychronous Spiking Neural Networks”. That work included the design of a polychronous spiking neural network using a novel delay-adaptation algorithm, an FPGA implementation of the proposed network, an analogue implementation, and their integration into a mixed-signal platform. Before starting his PhD, Mark worked in industry as a SoC/ASIC design engineer.

Talk abstract:

The human brain is the most complex and powerful computing device in the known universe. Its ability to learn, recognise complex patterns and make sense of the world is truly remarkable, as is its compact, power-efficient hardware. Since the dawn of scientific inquiry, the brain and the origin of human intelligence have been the subject of much debate and research. The field of Artificial Intelligence (AI) was born with a 1943 paper by Warren McCulloch and Walter Pitts that described a simplified neuron model. Since this seminal work, AI has branched into sub-disciplines including Neural Networks (NNs), Machine Learning and Computational Neuroscience (CNS). Neuromorphic Engineering (NE) is another offshoot that approaches AI from a different point of view, albeit with a similar aim: to emulate the extraordinary performance of the brain. NE was born in 1989 when Caltech professor Carver Mead published “Analog VLSI and Neural Systems”, a breakthrough publication that related the biology and physiology of neurons to the physics of transistors. This book paved the way for the neuromorphic approach, which attempts to gain insight into biological functionality by building hardware systems subject to similar physical constraints (noise, area, power consumption and so on), as well as to build intelligent computing machines. For much of the 26-year history of NE, the focus was on building sensory devices such as silicon retinas and silicon cochleas; in recent years, however, it has shifted to learning and classification networks that may one day replace conventional computer architectures. In this presentation, I’ll give a brief introduction to NE. I’ll also discuss some of the neuromorphic networks we have developed in my lab and their potential applications, particularly in machine learning.
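For readers unfamiliar with the McCulloch-Pitts model mentioned in the abstract: it reduces a neuron to a binary threshold unit that fires when the weighted sum of its inputs reaches a threshold. A minimal Python sketch (the weights and threshold below are arbitrary illustrative values, not taken from the talk) might look like:

```python
# Sketch of a McCulloch-Pitts threshold neuron (illustrative only;
# the particular weights and threshold are arbitrary examples).

def mcculloch_pitts(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of binary inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: a two-input AND gate — the neuron fires only when both inputs are active.
def and_gate(x1, x2):
    return mcculloch_pitts([x1, x2], [1, 1], threshold=2)
```

Despite its simplicity, this unit can implement basic logic gates (AND, OR, NOT), which is why the 1943 paper is regarded as a starting point for both neural networks and AI.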