
1. Window-Based Channel Attention for Wavelet-Enhanced Learned Image Compression (NSTL, National Science and Technology Library of China)

Heng Xu | Bowen Hai et al. - 《Computer Vision - ACCV 2024, Part VII》 - Asian Conference on Computer Vision - 2025 - pp. 450-467 - 18 pages

Abstract: Learned Image Compression (LIC) models have achieved rate-distortion performance superior to traditional codecs. Existing LIC models use CNN, Transformer, or mixed CNN-Transformer basic blocks. However, limited by the shifted window attention, Swin…
Keywords: Learned image compression | Window-based channel attention | Receptive field | Wavelet transform
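The channel attention named in this entry's keywords can be illustrated with a generic squeeze-and-excitation-style gate: globally pool each channel, pass the pooled vector through a small bottleneck, and rescale the channels by the resulting sigmoid weights. This is a minimal sketch of channel attention in general, not the paper's window-based variant; the random weights `w1`/`w2` stand in for learned parameters.

```python
import numpy as np

def channel_attention(feat, reduction=4):
    """Squeeze-and-excitation-style channel attention sketch.

    feat: (C, H, W) feature map. Returns the per-channel reweighted map.
    The two projection matrices are random stand-ins for learned weights.
    """
    c, h, w = feat.shape
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c, c // reduction)) / np.sqrt(c)
    w2 = rng.standard_normal((c // reduction, c)) / np.sqrt(c // reduction)

    squeezed = feat.mean(axis=(1, 2))              # global average pool -> (C,)
    hidden = np.maximum(squeezed @ w1, 0.0)        # bottleneck + ReLU
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid gate, one weight per channel
    return feat * scale[:, None, None]             # rescale each channel

out = channel_attention(np.ones((8, 4, 4)))
```

Because the gate is a sigmoid, every channel is scaled by a factor strictly between 0 and 1; a trained model learns which channels to suppress.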

2. COCA: Classifier-Oriented Calibration via Textual Prototype for Source-Free Universal Domain Adaptation

Xinghong Liu | Yi Zhou - 《Computer Vision - ACCV 2024, Part VII》 - Asian Conference on Computer Vision - 2025 - pp. 337-353 - 17 pages

Abstract: Universal domain adaptation (UniDA) aims to address domain and category shifts across data sources. Recently, due to more stringent data restrictions, researchers have introduced source-free UniDA (SF-UniDA). SF-UniDA methods eliminate the need for direct access to…
Keywords: Source-free universal domain adaptation | Transfer learning | Few-shot learning

3. A Universal Structure of YOLO Series Small Object Detection Models

Shengchao Hu | Xiao Liu et al. - 《Computer Vision - ACCV 2024, Part VII》 - Asian Conference on Computer Vision - 2025 - pp. 468-484 - 17 pages

Abstract: The YOLO series detection models play a crucial role in target detection tasks. However, these models are typically trained on datasets with standard angles. For datasets like VisDrone2021 and TinyPerson, there are challenges related to small, dense, and…
Keywords: YOLO series model | Small object detection | Universal structure

4. CNN Mixture-of-Depths

Rinor Cakaj | Jens Mehnert et al. - 《Computer Vision - ACCV 2024, Part VII》 - Asian Conference on Computer Vision - 2025 - pp. 148-166 - 19 pages

Abstract: We introduce Mixture-of-Depths (MoD) for Convolutional Neural Networks (CNNs), a novel approach that enhances the computational efficiency of CNNs by selectively processing channels based on their relevance to the current prediction. This method optimizes…
Keywords: CNN | Mixture-of-Depths | Computational efficiency | Inference speed
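The selective channel processing described in this abstract can be sketched as a routing step: score every channel, send only the top-k through the expensive transformation, and let the rest bypass it unchanged. This is a hedged illustration of the general idea, not the authors' implementation; `relevance` stands in for learned routing scores, and `np.tanh` stands in for a real convolutional block.

```python
import numpy as np

def mod_channel_select(x, relevance, k):
    """Mixture-of-Depths-style selective channel processing (toy version).

    x: (C, H, W) feature map; relevance: (C,) per-channel scores.
    Only the k highest-scoring channels are transformed; the remaining
    channels skip the computation and pass through unchanged.
    """
    top = np.argsort(relevance)[-k:]     # indices of the k most relevant channels
    out = x.copy()                       # bypassed channels cost no compute
    out[top] = np.tanh(x[top] * 2.0)     # stand-in for a learned conv block
    return out, top
```

At inference time the saving comes from only ever running the block on k of the C channels, which is where the "computational efficiency" and "inference speed" keywords point.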

5. D'OH: Decoder-Only Random Hypernetworks for Implicit Neural Representations

Cameron Gordon | Lachlan E. MacDonald et al. - 《Computer Vision - ACCV 2024, Part VII》 - Asian Conference on Computer Vision - 2025 - pp. 128-147 - 20 pages

Abstract: Deep implicit functions have been found to be an effective tool for efficiently encoding all manner of natural signals. Their attractiveness stems from their ability to compactly represent signals with little to no offline training data. Instead…
Keywords: Implicit neural representations | Compression | Hypernetworks

6. FOTV-HQS: A Fractional-Order Total Variation Model for LiDAR Super-Resolution with Deep Unfolding Network

Huiying Xi | Xia Yuan et al. - 《Computer Vision - ACCV 2024, Part VII》 - Asian Conference on Computer Vision - 2025 - pp. 76-92 - 17 pages

Abstract: LiDAR super-resolution can improve the quality of point cloud data, which is critical for improving many downstream tasks such as object detection, identification, and tracking. Traditional LiDAR super-resolution models often struggle with issues like block…
Keywords: Super-resolution | LiDAR | Fractional-order total variation | Deep unfolding
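For orientation, the classical (integer-order) total variation of a 2D range image is just the summed absolute differences between neighboring pixels; the fractional-order TV in this paper's title generalizes those first-order differences to fractional derivatives. A minimal sketch of the classical case only, not the paper's FOTV regularizer:

```python
import numpy as np

def total_variation(img):
    """Anisotropic integer-order TV of a 2D image: the sum of absolute
    forward differences along each axis. Smooth regions contribute little;
    edges and noise contribute a lot, which is why TV is used as a
    smoothness prior in super-resolution.
    """
    dx = np.abs(np.diff(img, axis=1)).sum()  # horizontal differences
    dy = np.abs(np.diff(img, axis=0)).sum()  # vertical differences
    return dx + dy
```

A constant image has zero TV, so minimizing TV under a data-fidelity constraint pushes the reconstruction toward piecewise-smooth depth maps.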

7. EQ-CBM: A Probabilistic Concept Bottleneck with Energy-Based Models and Quantized Vectors

Sangwon Kim | Dasom Ahn et al. - 《Computer Vision - ACCV 2024, Part VII》 - Asian Conference on Computer Vision - 2025 - pp. 270-286 - 17 pages

Abstract: The demand for reliable AI systems has intensified the need for interpretable deep neural networks. Concept bottleneck models (CBMs) have gained attention as an effective approach by leveraging human-understandable concepts to enhance interpretability. However…
Keywords: Concept bottleneck model | Energy-based model | Vector quantization

8. SAMIF: Adapting Segment Anything Model for Image Inpainting Forensics

Lan Zhang | Xinshan Zhu et al. - 《Computer Vision - ACCV 2024, Part VII》 - Asian Conference on Computer Vision - 2025 - pp. 303-319 - 17 pages

Abstract: Image inpainting technologies pose increasing threats to the security of image data through malicious use. Therefore, image inpainting forensics is crucial. The Segment Anything Model (SAM) is a powerful universal image segmentation model for various downstream…
Keywords: Image inpainting forensics | Segment anything model | Cross-domain alignment fusion | Artifact feature generator

9. HT-SSPG: Hierarchical Transformers for Semantic Surface Point Generation in 3D Object Detection

Wenhao Kong | Xiaowei Zhang - 《Computer Vision - ACCV 2024, Part VII》 - Asian Conference on Computer Vision - 2025 - pp. 20-37 - 18 pages

Abstract: Currently, the incomplete point cloud structure in LiDAR point clouds has become the primary challenge for improving detector performance. Point cloud completion methods address this issue by adding more points to regions of interest; however, due to imprecise…
Keywords: 3D object detection | Semantic point generation | Hierarchical transformers

10. UNet--: Memory-Efficient and Feature-Enhanced Network Architecture Based on U-Net with Reduced Skip-Connections

Lingxiao Yin | Wei Tao et al. - 《Computer Vision - ACCV 2024, Part VII》 - Asian Conference on Computer Vision - 2025 - pp. 185-201 - 17 pages

Abstract: U-Net models with encoder, decoder, and skip-connection components have demonstrated effectiveness in a variety of vision tasks. The skip-connections transmit fine-grained information from the encoder to the decoder. It is necessary to maintain the feature maps…
Keywords: Skip-connection | Memory | U-Net
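The skip-connections described in this abstract can be sketched in a few lines: at each decoder stage, the decoder upsamples its features and concatenates the matching encoder features along the channel axis. The memory cost this paper aims to reduce comes from having to hold those encoder maps until the decoder reaches them. A toy illustration of a single stage, not the UNet-- architecture itself:

```python
import numpy as np

def decoder_step(decoder_feat, encoder_feat):
    """One U-Net decoder stage with a skip-connection (toy version).

    decoder_feat: (C, H, W) features from the previous decoder stage.
    encoder_feat: (C, 2H, 2W) features saved from the matching encoder stage.
    Returns the channel-wise concatenation of the upsampled decoder
    features and the skipped encoder features: (2C, 2H, 2W).
    """
    # Nearest-neighbor 2x upsampling by repeating rows and columns.
    up = decoder_feat.repeat(2, axis=1).repeat(2, axis=2)
    # The skip-connection: encoder features join along the channel axis.
    return np.concatenate([up, encoder_feat], axis=0)
```

Every encoder map kept alive for a later `decoder_step` occupies memory for the whole intervening computation, which is why reducing skip-connections shrinks the peak footprint.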
Search condition - Source: 《Computer Vision - ACCV 2024, Part VII》