From 50209f55ae1bea0ce54b3fbe9281d563822dc469 Mon Sep 17 00:00:00 2001
From: anthology-assist
Date: Wed, 11 Dec 2024 19:45:14 -0600
Subject: [PATCH] Paper Revision 2024.findings-emnlp.646, closes #4142.

---
 data/xml/2024.findings.xml | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/data/xml/2024.findings.xml b/data/xml/2024.findings.xml
index 1588a597e8..cddb3a9a75 100644
--- a/data/xml/2024.findings.xml
+++ b/data/xml/2024.findings.xml
@@ -28024,9 +28024,11 @@
       <author><first>Dit-Yan</first><last>Yeung</last><affiliation>Hong Kong University of Science and Technology</affiliation></author>
       <pages>11057-11070</pages>
       <abstract>Large Language Models (LLMs) have demonstrated impressive capabilities in a wide range of natural language processing tasks when leveraging in-context learning. To mitigate the additional computational and financial costs associated with in-context learning, several prompt compression methods have been proposed to compress the in-context learning prompts. Despite their success, these methods face challenges with transferability due to model-specific compression, or rely on external training data, such as GPT-4. In this paper, we investigate the ability of LLMs to develop a unified compression method that discretizes uninformative tokens, utilizing a self-supervised pre-training technique. By introducing a small number of parameters during the continual pre-training, the proposed Selection-p produces a probability for each input token, indicating whether to preserve or discard it. Experiments show Selection-p achieves state-of-the-art performance across numerous classification tasks, achieving compression rates of up to 10 times while experiencing only a marginal 0.8% decrease in performance. Moreover, it exhibits superior transferability to different models compared to prior work. Additionally, we further analyze how Selection-p helps maintain performance on in-context learning with long contexts.</abstract>
-      <url hash="…">2024.findings-emnlp.646</url>
+      <url hash="…">2024.findings-emnlp.646</url>
       <bibkey>chung-etal-2024-selection</bibkey>
       <doi>10.18653/v1/2024.findings-emnlp.646</doi>
+      <revision id="1" href="2024.findings-emnlp.646v1" hash="…"/>
+      <revision id="2" href="2024.findings-emnlp.646v2" hash="…">Corrected email addresses.</revision>
     </paper>
     <paper id="647">
       <title>Adaptive Token Biaser: Knowledge Editing via Biasing Key Entities</title>