
Invertible Generalized Synchronization: a General Principle for Dynamical Learning in Neural Networks

Published: 2021-04-06  Editor: 瞿磊 (Qu Lei)  Source: Lanzhou Center for Theoretical Physics (兰州理论物理中心)

Speaker: Dr. Zhixin Lu (卢至欣), Postdoctoral Researcher, University of Pennsylvania

Title: Invertible Generalized Synchronization: a General Principle for Dynamical Learning in Neural Networks

Time: 10:00 AM, April 8, 2021

Meeting ID (Tencent Meeting): 231 912 043

Venue (in person): Room 1201, 理工楼

Contact: 黄亮 (Huang Liang)

Abstract:

The human brain is a complex, nonlinear dynamical system that can swiftly learn various dynamical tasks from exemplary sensory input. Similarly, a reservoir computer (RC), a type of recurrent neural network, can be trained with historical data from a dynamical system and then predict its future. Could the human brain and the RC share a common learning mechanism? Can we build artificial learning systems that emulate human learning ability? To shed light on these questions, I propose a universal, biologically plausible learning principle: invertible generalized synchronization (IGS). With IGS, neural networks can learn complex dynamics in a model-free manner through attractor embedding and support many other human-like learning functions. Reminiscent of human cognitive functions, the post-learning neural network can switch between learned tasks either autonomously or when induced by external cues. By leveraging IGS, I also demonstrate that a neural network can infer the values of unmeasured dynamical variables, and even infer unseen dynamical bifurcations. IGS is general enough to apply to many physical devices beyond traditional neural networks, and it allows for the principled study and precise design of dynamical-system-based AI.
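To make the RC training scheme mentioned in the abstract concrete, below is a minimal sketch of an echo-state-network-style reservoir computer that learns the Lorenz system from historical data and then runs autonomously. This is an illustrative sketch, not the speaker's implementation; the choice of the Lorenz system, the reservoir size, the spectral radius, and the ridge parameter are all assumptions made for demonstration.

import numpy as np

# Generate training data by integrating the Lorenz system (illustrative choice).
def lorenz_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

rng = np.random.default_rng(seed=0)
T = 5000
u = np.empty((T, 3))
u[0] = [1.0, 1.0, 1.0]
for t in range(T - 1):
    u[t + 1] = lorenz_step(u[t])

# Random reservoir: recurrent weights rescaled to spectral radius 0.9 (assumed).
N = 500
A = rng.normal(size=(N, N))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))
W_in = rng.uniform(-0.5, 0.5, size=(N, 3))

# Drive the reservoir with the historical signal (open loop, teacher forcing).
r = np.zeros((T, N))
for t in range(T - 1):
    r[t + 1] = np.tanh(A @ r[t] + W_in @ u[t])

# Train only the linear readout via ridge regression, discarding a warm-up transient.
warm = 500
R, U = r[warm:], u[warm:]
W_out = np.linalg.solve(R.T @ R + 1e-6 * np.eye(N), R.T @ U).T

# Closed-loop prediction: the readout output replaces the true input.
r_t, u_t = r[-1].copy(), u[-1].copy()
pred = np.empty((1000, 3))
for k in range(1000):
    r_t = np.tanh(A @ r_t + W_in @ u_t)
    u_t = W_out @ r_t
    pred[k] = u_t  # short-term forecast of the Lorenz trajectory

When the training succeeds, the driven reservoir's internal state forms an embedding of the driving system's attractor; this attractor-embedding picture is the generalized-synchronization viewpoint that the talk builds on.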

About the speaker:

Zhixin Lu (卢至欣) is a postdoctoral researcher at the University of Pennsylvania. His research interests span nonlinear dynamical systems and chaos, complex systems, artificial intelligence, and computational cognitive neuroscience. He received his Ph.D. from the University of Maryland, College Park in 2017, where he was advised by Edward Ott; his doctoral research covered nonlinear dynamics, neural networks, and machine learning, with results published in PRL and Chaos and reported by mainstream media outlets including CNN, the Washington Post, the New York Times, and Fox News. Since 2017 he has been a postdoctoral researcher at the University of Pennsylvania, advised by Danielle S. Bassett. His main work uses dynamical systems theory and complex networks to reveal the intrinsic connections between human intelligence and artificial intelligence, and builds on these connections to construct dynamical-system models that realize a variety of brain-like learning functions.

 
