Paper by 이동철 (M.S. Graduate) and 김혜영 (Ph.D. Student) Accepted at SIGIR 2026
05 Apr 2025
A paper by DIAL lab members 이동철 (M.S. graduate, Department of Artificial Intelligence, co-first author), 김혜영 (Ph.D. student, co-first author), and Prof. 이종욱 (corresponding author), titled "ACE: Anisotropy-Controllable Embedding for LLM-enhanced Sequential Recommendation", has been accepted to the ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2026, short paper), a premier international conference in the field of information retrieval. The paper will be presented this July.
Abstract
Recent advances in the LLM-as-Extractor paradigm leverage large language models (LLMs) to transfer semantically rich item embeddings into sequential recommendation (SR) backbones. However, LLM-generated embeddings often suffer from strong anisotropy: most vectors are concentrated in similar directions, and the resulting geometric imbalance makes it difficult to adapt to collaborative signals during fine-tuning. To address this challenge, we propose Anisotropy-Controllable Embedding (ACE), which explicitly controls the anisotropy of LLM-generated embeddings. Specifically, ACE utilizes a linear autoencoder (LAE) to reshape the embedding distribution while preserving its semantic structure. In this process, an L2-regularization term mitigates the anisotropy by controlling the dispersion of embedding dimensions, while the reconstruction loss maintains semantic relationships among items. In this way, ACE balances geometric uniformity against semantic preservation for more stable learning. Extensive experiments demonstrate that ACE consistently outperforms existing LLM-enhanced SR models, yielding improvements of up to 12.4% and 11.8% in Recall@20 and NDCG@20, respectively.
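To make the two competing terms in the abstract concrete, below is a minimal sketch, not the authors' implementation, of an L2-regularized linear autoencoder fit on item embeddings via the standard closed-form ridge solution. The anisotropy proxy (mean pairwise cosine similarity), the regularization strengths, and the synthetic data are illustrative assumptions; ACE's exact formulation may differ. The sketch only tracks how the regularization strength trades off the anisotropy proxy against reconstruction error.

```python
# Minimal sketch, NOT the authors' implementation: an L2-regularized linear
# autoencoder (LAE) over an item-embedding matrix E (n_items x d),
#     min_W ||E - E W||_F^2 + lam * ||W||_F^2,
# solved in closed form as W = (E^T E + lam * I)^{-1} E^T E.
# The anisotropy proxy, lam values, and synthetic data are illustrative.
import numpy as np

def fit_linear_autoencoder(E: np.ndarray, lam: float) -> np.ndarray:
    """Return the d x d map W minimizing the ridge reconstruction objective."""
    d = E.shape[1]
    gram = E.T @ E
    return np.linalg.solve(gram + lam * np.eye(d), gram)

def mean_pairwise_cosine(E: np.ndarray) -> float:
    """Anisotropy proxy: average cosine similarity between distinct items.
    Values near 1.0 indicate vectors crowded into similar directions."""
    X = E / np.linalg.norm(E, axis=1, keepdims=True)
    n = X.shape[0]
    sims = X @ X.T
    return float((sims.sum() - n) / (n * (n - 1)))

# Synthetic anisotropic embeddings: a shared dominant direction plus noise,
# mimicking the directional concentration described in the abstract.
rng = np.random.default_rng(0)
n_items, dim = 1000, 64
shared = rng.normal(size=(1, dim))
E = 3.0 * shared + rng.normal(size=(n_items, dim))

print(f"before: cosine={mean_pairwise_cosine(E):.3f}")
for lam in (1.0, 100.0, 10000.0):
    W = fit_linear_autoencoder(E, lam)
    E_out = E @ W
    recon_err = np.linalg.norm(E - E_out)
    print(f"lam={lam:>8}: cosine={mean_pairwise_cosine(E_out):.3f}, "
          f"recon_err={recon_err:.1f}")
```

As the abstract describes, the reconstruction term keeps the reshaped embeddings close to the originals (preserving item-item semantics), while the L2 term controls how the embedding dimensions are dispersed; the loop above simply exposes that trade-off at a few regularization strengths.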