Mathematics Seminar No. 2581: Variational Model based Attention/Transformer Mechanisms for Image Inverse Problem

Created: 2023/11/24 by 龚惠英

Title: Variational Model based Attention/Transformer Mechanisms for Image Inverse Problem

Speaker: Associate Prof. 刘君 (Beijing Normal University)

Time: Friday, November 24, 2023, 10:00

Place: Tencent Meeting 533326207

Inviter: Prof. 彭亚新

Host Department: 永利 Department of Mathematics

Abstract: Features extracted by deep convolutional neural networks (DCNNs) are often complicated and difficult to model. We developed a method to integrate feature priors into DCNN architectures via a variational approach. It is built upon the universal approximation property of the probability density functions of mixture distributions. By considering the duality of maximum likelihood estimation for deep features in high-dimensional space, several mechanisms are proposed, such as a learnable fidelity term, a learnable regularizer, and segmentation with a geometric prior. This partly reveals the connections between variational methods and some popular DCNN architectures in image processing: for example, weighted norms and attention, nonlocal regularization and transformers, duality and translation, and multi-grid methods and the encoder-decoder U-Net.
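One of the correspondences mentioned in the abstract, between nonlocal regularization and attention/transformers, can be illustrated with a minimal sketch. The code below is not the speaker's method; it is a standard nonlocal-means-style update written so that its structure matches a single self-attention layer (softmax-normalized similarity weights applied to the signal itself). The function name and the toy signal are illustrative choices.

```python
import numpy as np

def nonlocal_attention_denoise(x, tau=0.5):
    """Nonlocal-means-style update written in attention form.

    Each output element is a softmax-weighted average of all input
    elements, with logits given by negative squared differences --
    structurally the same as single-head self-attention where the
    queries, keys, and values are all the signal itself.
    """
    x = np.asarray(x, dtype=float)
    # Pairwise similarity logits: -(x_i - x_j)^2 / tau
    scores = -(x[:, None] - x[None, :]) ** 2 / tau
    # Row-wise softmax -> nonlocal weights W_ij, each row sums to 1
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Attention-style aggregation: y = W x
    return w @ x

# Toy 1-D signal with two clusters of similar values
signal = np.array([1.0, 1.05, 0.95, 5.0, 5.1])
out = nonlocal_attention_denoise(signal)
# Similar values are averaged toward each other, while the two
# clusters barely interact (their similarity weights are tiny).
```

In a variational formulation the weights `W` arise from a nonlocal regularization term; in a transformer they are the attention matrix. The sketch shows the two share the same softmax-weighted-average structure.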


