Paper by Jeongwoo Na (M.S. graduate), Jun Kwon (integrated M.S./Ph.D. student), and Eunseong Choi (integrated M.S./Ph.D. student) accepted to EMNLP 2025
21 Aug 2025
The paper "Multi-view-guided Passage Reranking with Large Language Models", co-authored by Jeongwoo Na (M.S. graduate, co-first author), Jun Kwon (integrated M.S./Ph.D. student, co-first author), and Eunseong Choi (integrated M.S./Ph.D. student, third author) of the DIAL Lab in the Department of Artificial Intelligence, together with Professor Jongwuk Lee (corresponding author), has been accepted to The 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP 2025), a premier international conference in natural language processing. The paper will be presented this coming November.
Abstract
Recent advances in large language models (LLMs) have shown impressive performance in passage reranking tasks. Despite their success, LLM-based methods still face challenges in efficiency and sensitivity to external biases. (i) Existing models rely mostly on autoregressive generation and sliding window strategies to rank passages, which incurs heavy computational overhead as the number of passages increases. (ii) External biases, such as positional or semantic bias, hinder the model's ability to accurately represent passages and make it sensitive to input order. To address these limitations, we introduce a novel passage reranking model, called Multi-View-guided Passage Reranking (MVP). MVP is a non-generative LLM-based reranking method that encodes query–passage information into diverse view embeddings without being influenced by external biases. For each view, it combines query-aware passage embeddings to produce a distinct anchor vector, which is used to directly compute relevance scores in a single decoding step. In addition, it employs an orthogonal loss to make the views more distinctive. Extensive experiments demonstrate that MVP, with just 220M parameters, matches the performance of much larger 7B-scale fine-tuned models while achieving a 100× reduction in inference latency. Notably, the 3B-parameter variant of MVP achieves state-of-the-art performance on both in-domain and out-of-domain benchmarks.
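To give a rough feel for the two ingredients the abstract mentions, the sketch below illustrates (a) scoring a passage against multiple view anchor vectors and (b) an orthogonality penalty that discourages views from collapsing into one another. This is an illustrative simplification only, not the authors' implementation: the function names, the cosine-similarity scoring, and the mean aggregation over views are assumptions made here for the example, and the paper's actual anchors come from LLM-encoded query–passage representations.

```python
import numpy as np

def orthogonality_penalty(anchors):
    """Penalty that grows as view anchors overlap.

    anchors: array of shape (n_views, dim). Computed here as the squared
    Frobenius norm of the off-diagonal entries of the Gram matrix of the
    L2-normalized anchors; this is a common form of orthogonal loss, used
    as a stand-in for the paper's exact formulation.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    gram = a @ a.T                      # pairwise cosine similarities
    off_diag = gram - np.eye(len(anchors))
    return float(np.sum(off_diag ** 2))

def relevance_score(anchors, passage_emb):
    """Score one passage as the mean cosine similarity to all view anchors.

    Mean aggregation is an assumption for this sketch; any single-step
    combination of per-view similarities fits the same pattern.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = passage_emb / np.linalg.norm(passage_emb)
    return float(np.mean(a @ p))

# Perfectly orthogonal views incur zero penalty.
views = np.eye(3)
print(orthogonality_penalty(views))                      # 0.0
print(relevance_score(views, np.array([1.0, 0.0, 0.0])))
```

Passages would then be reranked by sorting on `relevance_score`, with the penalty added to the training loss; the key point the abstract makes is that this scoring needs only a single decoding step per passage, rather than autoregressive generation over the whole candidate list.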