
1. SafeLLMs: A Benchmark for Secure Bilingual Evaluation of Large Language Models (NSTL国家科技图书文献中心, National Science and Technology Library)

Wenhan Liang | Huijia Wu... - 《Natural Language Processing and Chinese Computing, Part II》 - CCF International Conference on Natural Language Processing and Chinese Computing - 2025 - 437~448 - 12 pages

Abstract: Since the advent of the GPT-3.5 model, numerous large language models (LLMs) have emerged in China. With the increasing number of users, the security of these models has garnered extensive attention from researchers. However, the current evaluation...
Keywords: Model evaluation | Security assessment | Large language models

2. TARGET: Template-Transferable Backdoor Attack Against Prompt-Based NLP Models via GPT4

Zihao Tan | Qingliang Chen... - 《Natural Language Processing and Chinese Computing, Part II》 - CCF International Conference on Natural Language Processing and Chinese Computing - 2025 - 398~411 - 14 pages

Abstract: Prompt-based learning has been widely applied in many low-resource natural language processing (NLP) tasks such as few-shot scenarios. However, this paradigm has been shown to be vulnerable to backdoor attacks. Most of the existing attack methods focus on...
Keywords: Prompt-based learning | Backdoor attack | Few-shot classification tasks

3. ConFit: Contrastive Fine-Tuning of Text-to-Text Transformer for Relation Classification

Jiaxin Duan | Fengyu Lu... - 《Natural Language Processing and Chinese Computing, Part II》 - CCF International Conference on Natural Language Processing and Chinese Computing - 2025 - 16~29 - 14 pages

Abstract: Relation classification (RC) is commonly the second step in a relation extraction pipeline, which asserts the relation of two identified entities based on their context. The latest trend for dealing with the task resorts to pre-trained language models (PLMs)...
Keywords: Relation classification | Fine-tuning | Contrastive learning

4. DRAK: Unlocking Molecular Insights with Domain-Specific Retrieval-Augmented Knowledge in LLMs

Jinzhe Liu | Xiangsheng Huang... - 《Natural Language Processing and Chinese Computing, Part II》 - CCF International Conference on Natural Language Processing and Chinese Computing - 2025 - 255~267 - 13 pages

Abstract: Large Language Models (LLMs) typically manifest knowledge gaps in specialized applications due to pre-training on generalized textual corpora. Although fine-tuning and modality alignment aim to bridge this gap, their inability to provide comprehensive...
Keywords: Retrieval-augmented knowledge | Knowledge injection | Biomolecular domain

5. Prompt Debiasing via Causal Intervention for Event Argument Extraction

Jiaju Lin | Jie Zhou... - 《Natural Language Processing and Chinese Computing, Part II》 - CCF International Conference on Natural Language Processing and Chinese Computing - 2025 - 96~108 - 13 pages

Abstract: Prompt-based methods have become increasingly popular among information extraction tasks (e.g., event argument extraction), especially in low-data scenarios. By formatting a fine-tuning task into a pre-training objective, prompt-based methods resolve the data...
Keywords: Event argument extraction | Prompt learning | Causal intervention

6. CouBRE: Counterfactual NLI for Low-Resource Biomedical Relation Extraction

Jin Zhong | Yongbin Liu... - 《Natural Language Processing and Chinese Computing, Part II》 - CCF International Conference on Natural Language Processing and Chinese Computing - 2025 - 109~121 - 13 pages

Abstract: Biomedical relation extraction is a core problem in biomedical natural language processing, whose goal is to classify the relations between entity mentions within a given text, typically modeled as a classification task. There has been some...
Keywords: Biomedical relation extraction | Natural language inference | Counterfactual inference | Debias

7. HTCSI: A Hierarchical Text Classification Method Based on Selection-Inference

Yiming Xu | Jianzhou Feng... - 《Natural Language Processing and Chinese Computing, Part II》 - CCF International Conference on Natural Language Processing and Chinese Computing - 2025 - 307~318 - 12 pages

Abstract: Hierarchical Text Classification (HTC) is a complex task that involves categorizing text within a hierarchical labeling system. Current methodologies primarily focus on integrating hierarchical information into encoding models to achieve enhanced text representations...
Keywords: Hierarchical text classification | Large language models | Chain of thought

8. Bread: A Hybrid Approach for Instruction Data Mining Through Balanced Retrieval and Dynamic Data Sampling

Xinlin Zhuang | Xin Mao... - 《Natural Language Processing and Chinese Computing, Part II》 - CCF International Conference on Natural Language Processing and Chinese Computing - 2025 - 229~240 - 12 pages

Abstract: Recent advancements in Instruction Tuning (IT) have shown promise for aligning Large Language Models (LLMs) with users' intentions, yet its efficacy is often compromised by dependence on high-quality datasets. Previous works have concentrated on the...
Keywords: Large language models | Instruction tuning | Data selection

9. Span-Based Chinese Few-Shot NER with Contrastive and Prompt Learning

Feiyang Ye | Peichao Lai... - 《Natural Language Processing and Chinese Computing, Part II》 - CCF International Conference on Natural Language Processing and Chinese Computing - 2025 - 43~55 - 13 pages

Abstract: For Chinese Named Entity Recognition (NER) tasks, achieving better performance with fewer training samples remains a challenge. Previous works primarily focus on enhancing model performance in NER by incorporating additional knowledge to construct entity...
Keywords: Named entity recognition | Contrastive learning | Few-shot learning

10. Retrieval-Augmented Code Generation for Universal Information Extraction

Yucan Guo | Zixuan Li... - 《Natural Language Processing and Chinese Computing, Part II》 - CCF International Conference on Natural Language Processing and Chinese Computing - 2025 - 30~42 - 13 pages

Abstract: Information Extraction (IE) aims to extract structural knowledge (e.g., entities, relations, events) from natural language texts. Recently, Large Language Models (LLMs) with code-style prompts have demonstrated powerful capabilities in IE tasks. However, adopting code...
Keywords: Information extraction | Code generation | Large language model | Information retrieval
Search source: Natural Language Processing and Chinese Computing, Part II