NSTL (National Science and Technology Library) - conference proceedings search results for author: Xuxi Chen

1. AdaMV-MoE: Adaptive Multi-Task Vision Mixture-of-Experts

Tianlong Chen | Xuxi Chen et al. - 《2023 IEEE/CVF International Conference on Computer Vision: ICCV 2023, Paris, France, 1-6 October 2023, [v.1]》 - IEEE International Conference on Computer Vision - 2023 - pp. 17300-17311 - 12 pages

Abstract: Sparsely activated Mixture-of-Experts (MoE) is becoming a promising paradigm for multi-task learning (MTL). Instead of compressing multiple tasks’ knowledge into a single model, MoE separates the para...
Keywords: Training | Instance segmentation | Adaptation models | Adaptive systems | Object detection | Benchmark testing | Multitasking
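Since the abstract names sparsely activated MoE as the core mechanism, here is a minimal, hedged sketch of a generic top-k-routed MoE layer in PyTorch. It illustrates sparse expert activation only; the adaptive per-task expert selection that gives AdaMV-MoE its name is not shown, and all identifiers (SparseMoE, num_experts, k) are hypothetical.

```python
# Minimal sketch of sparsely activated MoE routing (generic top-k gating,
# not AdaMV-MoE's adaptive expert-number mechanism).
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    def __init__(self, dim, num_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )
        self.router = nn.Linear(dim, num_experts)   # per-token gate logits
        self.k = k

    def forward(self, x):                           # x: (tokens, dim)
        gates = self.router(x).softmax(dim=-1)      # (tokens, num_experts)
        topv, topi = gates.topk(self.k, dim=-1)     # keep only k experts per token
        out = torch.zeros_like(x)
        for slot in range(self.k):                  # route each token to its k experts
            for e in range(len(self.experts)):
                mask = topi[:, slot] == e
                if mask.any():
                    out[mask] += topv[mask, slot, None] * self.experts[e](x[mask])
        return out
```

Only k of the experts run for any given token, which is what keeps the compute of a wide MoE layer close to that of a single dense expert.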

2. DnA: Improving Few-Shot Transfer Learning with Low-Rank Decomposition and Alignment

Ziyu Jiang | Tianlong Chen et al. - 《Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, p.20》 - European Conference on Computer Vision - 2022 - pp. 239-256 - 18 pages

Abstract: Self-supervised (SS) learning has achieved remarkable success in learning strong representations for in-domain few-shot and semi-supervised tasks. However, when transferring such representations to dow...
Keywords: Self-supervised learning | Transfer few-shot | Low-rank
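For readers unfamiliar with the "decomposition" half of the title: the sketch below factorizes a pre-trained weight matrix into a low-rank pair via truncated SVD. This is a generic illustration under assumed shapes, not the paper's DnA procedure, whose alignment step is omitted.

```python
# Hedged sketch: factorize a pre-trained weight into a low-rank pair.
import torch

def low_rank_decompose(W: torch.Tensor, rank: int):
    """Approximate W (out x in) as U @ V with U: (out x rank), V: (rank x in)."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]          # fold singular values into U
    V_r = Vh[:rank, :]
    return U_r, V_r

W = torch.randn(512, 256)                # stand-in for a pre-trained weight
U, V = low_rank_decompose(W, rank=16)
print((W - U @ V).norm() / W.norm())     # relative reconstruction error
```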

3. Learning to Optimize Differentiable Games

Xuxi Chen | Nelson Vadori et al. - 《International Conference on Machine Learning: ICML 2023, Honolulu, Hawaii, USA, 23-29 July 2023, Part 7 of 54》 - International Conference on Machine Learning - 2023 - pp. 5036-5051 - 16 pages

Abstract: Many machine learning problems can be abstracted as game-theoretic formulations and boil down to optimizing nested objectives, such as generative adversarial networks (GANs) and multi-agent reinf...
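To make the "nested objectives" concrete: the toy below runs simultaneous gradient descent-ascent on a two-player saddle-point problem, the kind of hand-crafted update rule that learned game optimizers aim to improve on. It is a baseline sketch with an assumed toy objective, not the paper's method.

```python
# Simultaneous gradient descent-ascent on a toy game min_x max_y f(x, y).
import torch

x = torch.tensor([1.0], requires_grad=True)
y = torch.tensor([1.0], requires_grad=True)
lr = 0.1

def f(x, y):
    return x * y + 0.05 * x**2 - 0.05 * y**2   # toy saddle-point objective

for _ in range(100):
    loss = f(x, y)
    gx, gy = torch.autograd.grad(loss, (x, y))
    with torch.no_grad():
        x -= lr * gx      # player 1 descends
        y += lr * gy      # player 2 ascends
```

Plain descent-ascent can cycle or diverge on such games, which is part of the motivation for learning the update rule instead.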

4. DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models

Xuxi Chen | Tianlong Chen et al. - 《61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Long Papers, vol. 11, 9-14 July 2023, Toronto, Canada》 - Annual Meeting of the Association for Computational Linguistics - 2023 - pp. 8208-8222 - 15 pages

Abstract: Gigantic pre-trained models have become central to natural language processing (NLP), serving as the starting point for fine-tuning towards a range of downstream tasks. However, two pain points persis...
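As context for "sparsity-embedded efficient tuning": the sketch below freezes a pre-trained weight and learns only a sparse-plus-low-rank delta. The masking rule and all names are assumptions for illustration, not DSEE's actual algorithm.

```python
# Hedged sketch: fine-tune only a sparse + low-rank update to a frozen weight.
import torch
import torch.nn as nn

class SparseLowRankDelta(nn.Module):
    def __init__(self, linear: nn.Linear, rank=8, sparsity=0.99):
        super().__init__()
        self.register_buffer("weight", linear.weight.detach())  # frozen pre-trained weight
        out_f, in_f = self.weight.shape
        self.U = nn.Parameter(torch.zeros(out_f, rank))
        self.V = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.S = nn.Parameter(torch.zeros(out_f, in_f))
        # fixed random sparse mask: only a tiny fraction of entries are tunable
        self.register_buffer("mask", (torch.rand(out_f, in_f) > sparsity).float())

    def forward(self, x):
        delta = self.U @ self.V + self.S * self.mask  # low-rank + sparse update
        return x @ (self.weight + delta).T
```

Only the factors and the masked entries receive gradients, so the number of trainable parameters stays far below that of full fine-tuning.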

5. Efficient Lottery Ticket Finding: Less Data is More

Zhenyu Zhang | Xuxi Chen et al. - 《International Conference on Machine Learning: ICML 2021, Online, 18-24 July 2021, Part 16 of 16》 - International Conference on Machine Learning - 2022 - pp. 12370-12380 - 11 pages

Abstract: The lottery ticket hypothesis (LTH) (Frankle & Carbin, 2018) reveals the existence of winning tickets (sparse but critical subnetworks) for dense networks, which can be trained in isolation from random...
NSTL subject terms: Lottery | Findings
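The abstract builds on the standard LTH procedure, so a minimal sketch of iterative magnitude pruning (IMP) with weight rewinding may help. The paper's actual contribution (finding tickets with less data) is not reproduced here, and train_fn is a hypothetical training hook.

```python
# Minimal sketch of iterative magnitude pruning (IMP), the standard
# lottery-ticket procedure: train, prune the smallest weights, rewind, repeat.
import copy
import torch

def find_ticket(model, train_fn, rounds=3, prune_frac=0.2):
    init_state = copy.deepcopy(model.state_dict())    # keep the original init
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        train_fn(model, masks)                        # user hook: train with masks applied
        for n, p in model.named_parameters():
            alive = p[masks[n].bool()].abs()
            thresh = alive.quantile(prune_frac)       # prune smallest fraction of survivors
            masks[n] *= (p.abs() > thresh).float()
        model.load_state_dict(init_state)             # rewind weights to init
    return masks                                      # sparse "winning ticket" mask
```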

6. A Unified Lottery Ticket Hypothesis for Graph Neural Networks

Tianlong Chen | Yongduo Sui et al. - 《International Conference on Machine Learning: ICML 2021, Online, 18-24 July 2021, Part 3 of 16》 - International Conference on Machine Learning - 2022 - pp. 1685-1696 - 12 pages

Abstract: With graphs rapidly growing in size and deeper graph neural networks (GNNs) emerging, the training and inference of GNNs become increasingly expensive. Existing network weight pruning algorithms canno...
NSTL subject terms: Lottery | Neural network | Line graph
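To illustrate the "unified" idea of pruning both the graph and the model: the sketch below attaches learnable soft masks to the adjacency matrix and the weights of a one-layer GCN-style update. It is a simplified illustration under assumed dense-adjacency shapes, not the paper's co-sparsification algorithm.

```python
# Sketch: joint soft masks over graph edges and GNN weights, so inference
# touches fewer edges AND fewer parameters once the masks are thresholded.
import torch
import torch.nn as nn

class MaskedGCNLayer(nn.Module):
    def __init__(self, adj: torch.Tensor, in_dim, out_dim):
        super().__init__()
        self.register_buffer("adj", adj)                       # dense adjacency (n x n)
        self.adj_mask = nn.Parameter(torch.ones_like(adj))     # learnable edge mask
        self.lin = nn.Linear(in_dim, out_dim)
        self.w_mask = nn.Parameter(torch.ones_like(self.lin.weight))

    def forward(self, x):                                      # x: (n, in_dim)
        a = self.adj * torch.sigmoid(self.adj_mask)            # softly pruned edges
        w = self.lin.weight * torch.sigmoid(self.w_mask)       # softly pruned weights
        # after training, both masks would be thresholded to hard sparsity
        return torch.relu(a @ x @ w.T + self.lin.bias)
```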

7. Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets

Tianlong Chen | Xuxi Chen et al. - 《International Conference on Machine Learning: ICML 2022, Baltimore, Maryland, USA, 17-23 July 2022, Part 4 of 33》 - International Conference on Machine Learning - 2022 - pp. 3025-3039 - 15 pages

Abstract: The lottery ticket hypothesis (LTH) has shown that dense models contain highly sparse subnetworks (i.e., winning tickets) that can be trained in isolation to match full accuracy. Despite many exciting...
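The granularity shift in the title can be illustrated by scoring and keeping whole output channels instead of individual weights, as in the generic structured-pruning sketch below; the paper's own technique for turning unstructured tickets into structured ones is not shown.

```python
# Generic structured pruning: drop entire output channels by aggregate
# magnitude, which is hardware-friendly unlike element-wise sparsity.
import torch

def channel_prune_mask(weight: torch.Tensor, keep_frac=0.5):
    """weight: (out_channels, ...) conv or linear weight."""
    scores = weight.flatten(1).abs().sum(dim=1)       # one score per channel
    k = max(1, int(keep_frac * weight.shape[0]))
    keep = scores.topk(k).indices
    mask = torch.zeros(weight.shape[0], dtype=torch.bool)
    mask[keep] = True
    return mask                                       # whole rows/filters dropped

w = torch.randn(64, 3, 3, 3)                          # e.g. a conv weight
print(channel_prune_mask(w).sum().item(), "of", w.shape[0], "channels kept")
```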

8. Scalable Learning to Optimize: A Learned Optimizer Can Train Big Models

Xuxi Chen | Tianlong Chen et al. - 《Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, p.23》 - European Conference on Computer Vision - 2022 - pp. 389-405 - 17 pages

Abstract: Learning to optimize (L2O) has gained increasing attention since it demonstrates a promising path to automating and accelerating the optimization of complicated problems. Unlike manually crafted class...
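As a hedged illustration of the L2O setup the abstract describes: a small coordinate-wise network maps gradient features to parameter updates, replacing a hand-designed rule like SGD. This is a common L2O baseline pattern, not the paper's scalable architecture.

```python
# Sketch of a learned optimizer: a tiny MLP, applied to every parameter
# entry independently, produces the update step from gradient features.
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def step(self, params, grads):
        new_params = []
        for p, g in zip(params, grads):
            feats = torch.stack([g, g.sign()], dim=-1)        # simple gradient features
            update = self.net(feats.reshape(-1, 2)).reshape(p.shape)
            new_params.append(p - 0.01 * update)
        return new_params

# The optimizer's own weights are meta-trained by unrolling step() over many
# optimization trajectories and backpropagating through the unroll.
```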

9. Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training

Xuxi Chen | Wuyang Chen et al. - 《37th International Conference on Machine Learning: ICML 2020, Online, 13-18 July 2020, Part 2 of 15》 - International Conference on Machine Learning - 2021 - pp. 1510-1520 - 11 pages

Abstract: Many real-world applications have to tackle the Positive-Unlabeled (PU) learning problem, i.e., learning binary classifiers from a large amount of unlabeled data and a few labeled positive examples. W...
NSTL subject terms: Self | Functional training | Training | Calibration
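For background on the objective such methods train against: the sketch below implements the non-negative PU risk estimator of Kiryo et al. (2017), a standard starting point for PU training. Self-PU's self-boosting and calibration components are not shown, and `prior` is an assumed positive-class prior.

```python
# Non-negative PU risk (nnPU): estimate the negative-class risk from
# unlabeled data, corrected by the class prior, and clamp it at zero.
import torch

def nn_pu_risk(scores_pos, scores_unl, prior=0.3):
    """scores_*: raw classifier outputs; sigmoid loss as the surrogate."""
    loss = lambda s, y: torch.sigmoid(-y * s)           # y in {+1, -1}
    r_pos = loss(scores_pos, +1).mean()                 # risk on labeled positives
    r_neg = loss(scores_unl, -1).mean() - prior * loss(scores_pos, -1).mean()
    return prior * r_pos + torch.clamp(r_neg, min=0.0)  # non-negativity correction
```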
Search criteria - Author: Xuxi Chen