Akari Asai

Ph.D. student @ Paul G. Allen School of Computer Science & Engineering, University of Washington
Visiting Student Researcher @ Meta AI


I am currently in the final year of my Ph.D. in NLP at the Paul G. Allen School of Computer Science & Engineering, University of Washington, where I am fortunate to be advised by Prof. Hannaneh Hajishirzi. I am also spending time at Meta AI Research as a visiting student researcher, under the supervision of Dr. Wen-tau Yih. Prior to joining UW, I obtained a B.E. in Electrical Engineering and Computer Science from The University of Tokyo, Japan.

I am on the academic job market this year! Please feel free to reach out if you’d like to discuss opportunities.

My primary research interests are natural language processing and machine learning. My recent research focuses on large language models (LLMs) and retrieval-augmented language models, which address many of the inherent limitations of LLMs by dynamically retrieving and incorporating external knowledge at inference time.

My work has received multiple paper awards at venues such as ACL and NeurIPS workshops, and has been featured in major media outlets such as Forbes and MIT Technology Review. I am honored to have been named an EECS Rising Star (2024), an IBM Global Ph.D. Fellow (2022-2023), and an MIT Technology Review Innovator Under 35 from Japan (2024).

I am also passionate about teaching, mentoring, and helping students learn to do research, especially students from underrepresented groups. I have been the Head TA for CSE473: Intro to AI (undergraduate) and CSE599J: Data-centric ML (graduate) at UW. To lower the barrier to starting research or a Ph.D. in this area, I host weekly office hours open to everyone (please sign up via Calendly!), and I am a mentor for the UW CSE Ph.D. Pre-Application Mentorship Service (PAMS).

Update (September 2024): I have temporarily paused my public office hours. If you’re seeking feedback on your Ph.D. application materials or have questions about the UW CSE Ph.D. program, I highly recommend applying to the UW CSE Ph.D. Pre-Application Mentorship Service (PAMS) and exploring similar programs at other institutions. Unfortunately, I won’t be able to mentor new students during the 2024-2025 academic year. If you’re interested in collaborating with students from H2Lab, please submit an inquiry through our group website.

news

Oct 31, 2024 I’m honored to have been chosen as an MIT Technology Review Innovator Under 35 from Japan! See the MIT Technology Review article about my work on retrieval-augmented LMs to build more reliable LM-based systems.
Oct 22, 2024 We released Pangea, a new state-of-the-art multilingual and multimodal LLM! Check out our demo!
Sep 25, 2024 Our work on scaling retrieval datastores has been accepted at NeurIPS!
Sep 19, 2024 CopyBench has been accepted at EMNLP as a main conference paper!
Sep 17, 2024 I gave a lecture, “Retrieval-augmented LMs: Past, Present and Future,” at CMU (Large Language Models: Methods and Applications).

selected publications

See my full publication list on the publications page!

  1. Scaling Retrieval-Based Language Models with a Trillion-Token Datastore
    Rulin Shao, Jacqueline He, Akari Asai, Weijia Shi, Tim Dettmers, and 3 more authors
    In Advances in Neural Information Processing Systems (NeurIPS), 2024
  2. Fine-grained Hallucination Detection and Editing for Language Models
    Abhika Mishra, Akari Asai, Yizhong Wang, Vidhisha Balachandran, Graham Neubig, and 2 more authors
    In Conference on Language Modeling (COLM), 2024
  3. Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
    Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi
    In The Twelfth International Conference on Learning Representations (ICLR; Oral, Top 1%), 2024
  4. Reliable, Adaptable, and Attributable Language Models with Retrieval
    Akari Asai, Zexuan Zhong, Danqi Chen, Pang Wei Koh, Luke Zettlemoyer, and 2 more authors
    arXiv preprint, 2024
  5. When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories
    Alex Mallen*, Akari Asai*, Victor Zhong, Rajarshi Das, Daniel Khashabi, and 1 more author
    In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL; Oral, Best Video Paper Award – Most Viewed), 2023
  6. Task-aware Retrieval with Instructions
    Akari Asai, Timo Schick, Patrick Lewis, Xilun Chen, Gautier Izacard, and 3 more authors
    In Findings of the Association for Computational Linguistics: ACL 2023 (Findings Spotlight), 2023
  7. Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks
    Akari Asai, Matt Gardner, and Hannaneh Hajishirzi
    In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL; Oral), 2022
  8. One Question Answering Model for Many Languages with Cross-lingual Dense Passage Retrieval
    Akari Asai, Xinyan Yu, Jungo Kasai, and Hannaneh Hajishirzi
    In Advances in Neural Information Processing Systems (NeurIPS), 2021
  9. XOR QA: Cross-lingual Open-Retrieval Question Answering
    Akari Asai, Jungo Kasai, Jonathan Clark, Kenton Lee, Eunsol Choi, and 1 more author
    In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL; Oral), 2021
  10. LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention
    Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto
    In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
  11. Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering
    Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong
    In International Conference on Learning Representations (ICLR), 2020