Mathematics Seminar No. 2479: Nearly Optimal VC-Dimension and Pseudo-Dimension Bounds for Deep Neural Network Derivatives

Created: 2023/10/14 by 龚惠英

Title: Nearly Optimal VC-Dimension and Pseudo-Dimension Bounds for Deep Neural Network Derivatives

Speaker: Dr. 杨雅鸿 (Pennsylvania State University)

Time: 9:00, Thursday, October 19, 2023

Place: Tencent Meeting (ID: 696406234)

Inviter: 秦晓雪

Host Department: 永利数学系 (Department of Mathematics)

Abstract: This paper addresses the problem of nearly optimal Vapnik--Chervonenkis dimension (VC-dimension) and pseudo-dimension estimations of the derivative functions of deep neural networks (DNNs). Two important applications of these estimations include: 1) establishing a nearly tight approximation result for DNNs in the Sobolev space; 2) characterizing the generalization error of machine learning methods whose loss functions involve function derivatives. This theoretical investigation fills the gap in learning error estimations for a wide range of physics-informed machine learning models and applications, including generative models, solving partial differential equations, operator learning, network compression, distillation, regularization, etc.
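For context on the complexity measures named in the abstract, the standard definitions of pseudo-dimension and VC-dimension for a class of real-valued functions are sketched below; the notation is illustrative and not taken from the talk. A class $\mathcal{F}$ of functions $f \colon X \to \mathbb{R}$ pseudo-shatters points $x_1,\dots,x_n \in X$ if there exist thresholds $y_1,\dots,y_n \in \mathbb{R}$ such that every sign pattern can be realized:
\[
\operatorname{Pdim}(\mathcal{F}) = \max\Bigl\{\, n : \exists\, x_1,\dots,x_n,\ y_1,\dots,y_n \ \text{s.t.}\ \forall\, b \in \{0,1\}^n\ \exists f \in \mathcal{F} \ \text{with}\ \mathbf{1}\bigl[f(x_i) > y_i\bigr] = b_i \ \text{for all } i \,\Bigr\},
\]
\[
\operatorname{VCdim}(\mathcal{F}) = \operatorname{VCdim}\bigl(\{\mathbf{1}[f(\cdot) > 0] : f \in \mathcal{F}\}\bigr).
\]
In the setting of the talk, $\mathcal{F}$ is a class of derivative functions of DNNs, e.g. $\{\partial_{x_j}\phi_\theta : \theta\}$ for a fixed architecture $\phi_\theta$. One typical way such quantities enter generalization analysis is through uniform deviation bounds of the form (stated here up to constants, for a loss $\ell$ bounded by $B$ over $n$ i.i.d. samples $z_1,\dots,z_n$)
\[
\sup_{f \in \mathcal{F}} \Bigl| \mathbb{E}\,\ell(f) - \frac{1}{n}\sum_{i=1}^{n} \ell(f; z_i) \Bigr| \;\lesssim\; B\,\sqrt{\frac{\operatorname{Pdim}(\ell \circ \mathcal{F})\,\log n}{n}} \;+\; B\,\sqrt{\frac{\log(1/\delta)}{n}},
\]
which holds with probability at least $1-\delta$. The talk's contribution, per the abstract, is sharp estimates of these dimensions when $\mathcal{F}$ consists of DNN derivatives, which is what Sobolev-norm approximation and derivative-based losses (e.g. physics-informed training) require.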


