机器之心 & ArXiv Weekly Radiostation
Contributors: 杜伟, 楚航, 罗若天
This week's highlighted papers include an exploratory study that, for the first time, trains deep Transformers without residual connections or normalization layers, and DeepMind's code-writing AI AlphaCode landing in Science with coding ability on par with human programmers.
Table of contents:
Competition-level code generation with AlphaCode
Inverse scaling can become U-shaped
FedALA: Adaptive Local Aggregation for Personalized Federated Learning
An Efficient Training Approach for Very Large Scale Face Recognition
Deep Transformers without Shortcuts: Modifying Self-attention for Faithful Signal Propagation
EVA: Exploring the Limits of Masked Visual Representation Learning at Scale
Join the High Accuracy Club on ImageNet with A Binary Neural Network Ticket
ArXiv Weekly Radiostation: more selected papers in NLP, CV, and ML (with audio)
Paper 1: Competition-level code generation with AlphaCode
Authors: Yujia Li et al.
Paper link: https://www.science.org/doi/10.1126/science.abq1158
Abstract: At the beginning of this year, DeepMind released AlphaCode, a new Transformer-based model for large-scale code generation. AlphaCode has now been published in a new paper in Science, and the research made the cover of the journal.
Recommended: DeepMind's AlphaCode lands in Science; its coding ability is no worse than a programmer's.
Paper 2: Inverse scaling can become U-shaped
Authors: Jason Wei et al.
Paper link: https://arxiv.org/pdf/2211.02011.pdf
Abstract: On many tasks, the larger the language model, the better the performance. But are there cases where results get worse as model scale increases? A paper recently published by Google may give us the answer. The tasks awarded the Inverse Scaling Prize are: Negation QA, Hindsight Neglect, Quote Repetition, and Redefine Math.
Recommended: Bigger model, worse performance? Google collected the tasks that break large models and built a new benchmark from them.
Paper 3: FedALA: Adaptive Local Aggregation for Personalized Federated Learning
Authors: Jianqing Zhang et al.
Paper link: https://arxiv.org/pdf/2212.01197.pdf
Abstract: This paper proposes an adaptive local aggregation method for personalized federated learning, which tackles statistical heterogeneity by automatically capturing, from the global model, the information each client needs. The authors compared FedALA with 11 SOTA methods and outperformed the best of them by 3.27%. Applying the adaptive local aggregation module to other federated learning methods yielded improvements of up to 24.19%. The paper has been accepted at AAAI 2023. The figure below shows the adaptive local aggregation (ALA) process.
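The core of ALA is simple to sketch: rather than overwriting its local model with the downloaded global model, each client blends the two element-wise with trainable weights fitted on its own data. Below is a minimal NumPy sketch of that blending idea; the weight-update rule, the clipping, and the `grad_fn` interface are illustrative assumptions, not the authors' exact implementation (which, for instance, applies ALA only to the higher layers).

```python
import numpy as np

def ala_blend(local_params, global_params, w):
    """Element-wise adaptive blend: local + w * (global - local).

    w = 0 keeps the local parameters, w = 1 adopts the global ones;
    intermediate values mix the two per element.
    """
    return [lp + wi * (gp - lp) for lp, gp, wi in zip(local_params, global_params, w)]

def learn_ala_weights(local_params, global_params, grad_fn, lr=0.01, steps=10):
    """Learn the blending weights w on local data.

    grad_fn(params) is assumed to return per-parameter gradients of the
    client's local training loss; w is updated to minimize that loss.
    """
    w = [np.ones_like(p) for p in local_params]  # start from "take the global model"
    for _ in range(steps):
        blended = ala_blend(local_params, global_params, w)
        grads = grad_fn(blended)  # dL/d(blended params), supplied by the caller
        for wi, g, lp, gp in zip(w, grads, local_params, global_params):
            wi -= lr * g * (gp - lp)       # chain rule: d(blend)/dw = global - local
            np.clip(wi, 0.0, 1.0, out=wi)  # keep each weight in [0, 1]
    return ala_blend(local_params, global_params, w)
```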
Recommended: Beating SOTA by 3.27%, SJTU proposes a new adaptive local aggregation method.
Paper 4: An Efficient Training Approach for Very Large Scale Face Recognition
Authors: Kai Wang et al.
Paper link: https://arxiv.org/pdf/2105.10375.pdf
Abstract: This paper surveys existing solutions for very-large-scale classification and introduces the principles and tricks behind FFC, a low-cost classification framework. The paper has been accepted at CVPR 2022. The figure below compares SOTA methods.
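The cost bottleneck in very-large-scale face recognition is the final fully connected layer: with millions of identities, storing and updating the full class-weight matrix dominates GPU memory. The low-cost idea is to score each batch against only a small, dynamically maintained pool of class centers: the batch's own identities plus sampled negatives. The PyTorch sketch below illustrates that sampling idea only; the pool size, sampling rule, and center-update scheme here are assumptions for illustration, not FFC's exact design.

```python
import torch
import torch.nn.functional as F

def sampled_softmax_loss(features, labels, class_centers, num_neg=4096):
    """Cross-entropy over positives + sampled negatives instead of all classes.

    features:      (B, D) embeddings from the backbone
    labels:        (B,) identity ids
    class_centers: (C, D) full class-center table (kept off-GPU in practice)
    """
    pos_ids = labels.unique()
    # Sample negative identities uniformly; duplicates are removed below.
    neg_ids = torch.randint(0, class_centers.size(0), (num_neg,))
    cand_ids = torch.cat([pos_ids, neg_ids]).unique()
    # Gather only the candidate centers; this is the small "pool".
    pool = class_centers[cand_ids]                     # (P, D)
    logits = features @ pool.t()                       # (B, P)
    # Remap original identity ids to positions inside the pool.
    remap = {int(c): i for i, c in enumerate(cand_ids)}
    pool_labels = torch.tensor([remap[int(l)] for l in labels])
    return F.cross_entropy(logits, pool_labels)
```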
Recommended: DAMO Academy open-sources FFC, a low-cost framework for large-scale classification.
Paper 5: Deep Transformers without Shortcuts: Modifying Self-attention for Faithful Signal Propagation
Authors: Anonymous
Paper link: https://openreview.net/pdf?id=NPrsUQgMjKK
Abstract: This paper, in the blind-review stage at ICLR 2023, demonstrates for the first time that deep Transformers can be trained successfully without residual connections or normalization layers. To this end, the authors study three approaches for preventing signal-propagation failure and rank collapse in deep residual-free Transformers.
Specifically, the method uses a combination of parameter initialization, bias matrices, and location-dependent rescaling, and highlights several complexities of signal propagation unique to Transformers, including interactions with positional encoding and causal masking. The researchers show that their approach yields trainable deep Transformers.
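The flavor of the fix can be illustrated: make each attention layer act close to the identity map at initialization, so signals neither vanish nor collapse to rank one as depth grows. The toy sketch below approximates this by mixing the softmax attention matrix with the identity; it illustrates the principle only and is not the paper's actual bias-matrix and rescaling construction.

```python
import torch
import torch.nn.functional as F

def identity_biased_attention(q, k, v, alpha=0.95, causal=True):
    """Self-attention nudged toward the identity map at initialization.

    The output mixes alpha * v_i with (1 - alpha) * (softmax attention)_i,
    so with alpha near 1 each position mostly passes its own value through,
    which helps preserve signal propagation in deep shortcut-free stacks.
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5           # (T, T) logits
    if causal:
        t = scores.size(-1)
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))  # no attending to the future
    attn = F.softmax(scores, dim=-1)
    eye = torch.eye(scores.size(-1))
    mixed = alpha * eye + (1 - alpha) * attn              # convex mix with identity
    return mixed @ v
```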
Recommended: A paper that won rave reviews during ICLR blind review: will it be a major innovation for the Transformer architecture?
Paper 6: EVA: Exploring the Limits of Masked Visual Representation Learning at Scale
Authors: Yuxin Fang et al.
Paper link: https://arxiv.org/pdf/2211.07636.pdf
Abstract: BAAI open-sources EVA, a simple yet powerful vision foundation model with 1 billion parameters. By combining the strongest semantic learning with the strongest geometric-structure learning, EVA achieves the strongest performance on a broad range of visual perception tasks, including ImageNet classification, COCO detection and segmentation, and Kinetics video classification.
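EVA's pretraining objective is easy to state: mask out a large fraction of image patches and train a plain ViT to regress the CLIP vision features of the masked patches, tying pixel-level geometric structure to CLIP's semantics. Below is a schematic sketch of that objective; the module interfaces and the cosine-regression loss form are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def eva_mim_loss(student_vit, clip_vision_tower, patches, mask):
    """Masked image modeling with CLIP features as regression targets.

    patches: (B, N, D_in) patch embeddings of the full image
    mask:    (B, N) boolean, True where the patch is hidden from the student
    """
    with torch.no_grad():
        targets = clip_vision_tower(patches)  # (B, N, D) frozen CLIP features
    preds = student_vit(patches, mask)        # student sees the corrupted input
    # Cosine-similarity regression on masked positions only.
    p = F.normalize(preds[mask], dim=-1)
    t = F.normalize(targets[mask], dim=-1)
    return 1 - (p * t).sum(dim=-1).mean()
```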
Recommended: 1 billion parameters and multiple SOTA results: BAAI open-sources the vision foundation model EVA.
Paper 7: Join the High Accuracy Club on ImageNet with A Binary Neural Network Ticket
Authors: Nianhui Guo et al.
Paper link: https://arxiv.org/pdf/2211.12933.pdf
Abstract: Researchers including Nianhui Guo and Haojin Yang at the Hasso Plattner Institute in Germany propose BNext, the first BNN to break 80% top-1 classification accuracy on the ImageNet dataset. The figure below compares the performance of SOTA BNNs on ImageNet.
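A binary neural network constrains weights and activations to ±1, so multiplications reduce to sign flips and bit operations. Training relies on the straight-through estimator: the forward pass uses binarized values, while the backward pass treats the sign function as a clipped identity. The PyTorch sketch below shows this standard BNN building block; BNext itself layers further architecture and optimization tricks on top, which are not reproduced here.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """sign() forward, straight-through (clipped identity) backward."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass gradients only where |x| <= 1 (hard-tanh straight-through).
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

def binary_linear(x, w):
    """Linear layer with binarized activations and weights."""
    xb = BinarizeSTE.apply(x)
    wb = BinarizeSTE.apply(w)
    return xb @ wb.t()
```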
Recommended: BNext, the first binary neural network to exceed 80% accuracy on ImageNet, arrives.
ArXiv Weekly Radiostation
The 机器之心 ArXiv Weekly Radiostation, co-founded by 楚航 and 罗若天, selects more important papers beyond this week's 7 Papers, 10 each in the NLP, CV, and ML fields, and provides paper abstracts in audio form. Details below:
This week's 10 selected NLP papers are:
1. Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE. ( from Yu Qiao, Xinbo Gao, Xiaoou Tang, Dacheng Tao )
2. Learning to Dub Movies via Hierarchical Prosody Models. ( from Ming-Hsuan Yang, Qingming Huang )
3. Improving Simultaneous Machine Translation with Monolingual Data. ( from Dacheng Tao )
4. Intermediate Entity-based Sparse Interpretable Representation Learning. ( from Joydeep Ghosh )
5. A Survey on GPT-3. ( from Bhaskar Krishnamachari )
6. ZeroKBC: A Comprehensive Benchmark for Zero-Shot Knowledge Base Completion. ( from Hongming Zhang )
7. Constructing Highly Inductive Contexts for Dialogue Safety through Controllable Reverse Generation. ( from Minlie Huang )
8. KPT: Keyword-guided Pre-training for Grounded Dialog Generation. ( from Minlie Huang )
9. LawngNLI: A Long-Premise Benchmark for In-Domain Generalization from Short to Long Contexts and for Implication-Based Retrieval. ( from Dan Roth )
10. SoftCorrect: Error Correction with Soft Detection for Automatic Speech Recognition. ( from Xiang-Yang Li, Tie-Yan Liu )
This week's 10 selected CV papers are:
1. NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors. ( from Leonidas Guibas, Dragomir Anguelov )
2. ALTO: Alternating Latent Topologies for Implicit 3D Reconstruction. ( from Leonidas Guibas )
3. Improving Zero-shot Generalization and Robustness of Multi-modal Models. ( from Ming-Hsuan Yang, Laurent Itti )
4. Self-supervised AutoFlow. ( from Ming-Hsuan Yang )
5. Consistency-Aware Anchor Pyramid Network for Crowd Localization. ( from Qingming Huang, Ming-Hsuan Yang, Nicu Sebe )
6. UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation. ( from Ming-Hsuan Yang )
7. Progressive Multi-resolution Loss for Crowd Counting. ( from Qingming Huang, Ming-Hsuan Yang )
8. Exploiting Completeness and Uncertainty of Pseudo Labels for Weakly Supervised Video Anomaly Detection. ( from Qingming Huang, Ming-Hsuan Yang )
9. AsyInst: Asymmetric Affinity with DepthGrad and Color for Box-Supervised Instance Segmentation. ( from Alan Yuille )
10. Discovering Class-Specific GAN Controls for Semantic Image Synthesis. ( from Bernt Schiele )
This week's 10 selected ML papers are:
1. Learning Graph Search Heuristics. ( from Jure Leskovec, Pietro Liò )
2. Multi-Rate VAE: Train Once, Get the Full Rate-Distortion Curve. ( from Jimmy Ba )
3. AL-iGAN: An Active Learning Framework for Tunnel Geological Reconstruction Based on TBM Operational Data. ( from Dacheng Tao )
4. Specifying Behavior Preference with Tiered Reward Functions. ( from Michael L. Littman )
5. Benchmarking AutoML algorithms on a collection of binary problems. ( from Jason H. Moore )
6. On the Global Solution of Soft k-Means. ( from Feiping Nie, Xuelong Li )
7. On the Importance of Clinical Notes in Multi-modal Learning for EHR Data. ( from Gunnar Rätsch )
8. PRISM: Probabilistic Real-Time Inference in Spatial World Models. ( from Daniel Cremers )
9. Intervening With Confidence: Conformal Prescriptive Monitoring of Business Processes. ( from Marlon Dumas )
10. Contactless Oxygen Monitoring with Gated Transformer. ( from Dina Katabi )
THE END
For reprint authorization, please contact this official WeChat account.
Submissions or requests for coverage: content@jiqizhixin.com