[Paper Review] Lost in the Prompt Order: Revealing the Limitations of Causal Attention in Language Models
A detailed review of the paper 'Lost in the Prompt Order: Revealing the Limitations of Causal Attention in Language Models', posted on arXiv.
#Review #Prompt Engineering #Large Language Models #Causal Attention #Multiple-Choice QA #Prompt Order Sensitivity #Information Bottleneck #Decoder-only Transformers
January 21, 2026
[Paper Review] Mind the Gap: A Closer Look at Tokenization for Multiple-Choice Question Answering with LLMs
A detailed review of the paper 'Mind the Gap: A Closer Look at Tokenization for Multiple-Choice Question Answering with LLMs', posted on arXiv by Katharina von der Wense.
#Review #LLM Evaluation #Multiple-Choice QA #Tokenization #Prompt Sensitivity #Accuracy #Calibration #Model Ranking
September 19, 2025