Tatsuro Inaba (稲葉 達郎) [日本語]

Portrait of Tatsuro Inaba

Tatsuro Inaba is a first-year Ph.D. student in the NLP Department at MBZUAI. His research interests include the interpretability of language and music models, training dynamics, and plagiarism prevention in generative models.

News

  • 08/2025: A first-author paper was accepted at EMNLP 2025.
  • 01/2025: A co-authored paper was accepted at NAACL 2025.
  • 09/2024: A first-author paper was accepted at APSIPA ASC 2024.
  • 05/2023: A first-author paper was accepted at ACL 2023.

Career

Education

  • 08/2025-present: Ph.D. student, NLP Department, Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI).
  • 04/2025-08/2025: Doctoral program, Graduate School of Information Sciences, Tohoku University (withdrew Aug 2025).
  • 04/2023-03/2025: M.S. student, Graduate School of Informatics, Kyoto University.
  • 04/2019-03/2023: B.Eng. student, Undergraduate School of Electrical and Electronic Engineering, Kyoto University.

Professional Experience

  • 04/2025-08/2025: JSPS Research Fellow (DC1), Tohoku University.
  • 12/2024-08/2025: Research Assistant, Research and Development Center for Large Language Models, National Institute of Informatics.
  • 01/2024-02/2024: Visiting Student, MBZUAI. [blog]
  • 11/2023-11/2024: Research Assistant, Kyoto University.
  • 09/2023-10/2023: Machine Learning Engineer (Intern), Recruit Co., Ltd.
  • 08/2023-09/2023: Research Intern, Preferred Networks, Inc. [blog]
  • 11/2022-07/2023: Research Engineer (Part-time), DATAGRID Inc.

Grants/Awards

Publications

Preprints

  • Ryosuke Takahashi, Tatsuro Inaba, Kentaro Inui, and Benjamin Heinzerling.
    "TopK Language Models.", Jun 2025.
    [arXiv]

Refereed Papers

  • Tatsuro Inaba, Go Kamoda, Kentaro Inui, Masaru Isonuma, Yusuke Miyao, Yohei Oseki, Yu Takagi, and Benjamin Heinzerling.
    "How a Bilingual LM Becomes Bilingual: Tracing Internal Representations with Sparse Autoencoders."
    The 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP 2025 Findings)
    [arXiv (previous version); an updated version will be posted soon]
  • Go Kamoda, Benjamin Heinzerling, Tatsuro Inaba, Keito Kudo, Keisuke Sakaguchi, and Kentaro Inui.
    "Weight-based Analysis of Detokenization in Language Models: Understanding the First Stage of Inference Without Inference."
    The 2025 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2025 Findings)
    [arXiv]
  • Tatsuro Inaba, Kazuyoshi Yoshii, and Eita Nakamura.
    "On the Importance of Time and Pitch Relativity for Transformer-Based Symbolic Music Generation."
    The 16th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC 2024)
    [paper]
  • Tatsuro Inaba, Hirokazu Kiyomaru, Fei Cheng, and Sadao Kurohashi.
    "MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of Thought Prompting"
    The 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023 Main)
    [code, arXiv, poster]

Domestic Conferences (Japan)

  • Tatsuro Inaba, Kentaro Inui, Yusuke Miyao, Yohei Oseki, Benjamin Heinzerling, and Yu Takagi.
    "Cross-Checkpoint Analysis of Large Language Models Using Sparse Autoencoders." (in Japanese)
    The 31st Annual Meeting of the Association for Natural Language Processing (NLP2025), 2025.
    [paper]
  • Go Kamoda, Benjamin Heinzerling, Tatsuro Inaba, Keito Kudo, Keisuke Sakaguchi, and Kentaro Inui.
    "Investigating the Detokenization Mechanism from Language Model Parameters." (in Japanese)
    The 31st Annual Meeting of the Association for Natural Language Processing (NLP2025), 2025.
    [paper]
  • Tatsuro Inaba, Kazuyoshi Yoshii, and Eita Nakamura.
    "On the Importance of Time and Pitch Relativity in Music Generation." (in Japanese)
    The 141st Meeting of the IPSJ Special Interest Group on Music and Computer (SIGMUS), 2024.
    [paper]
  • Tatsuro Inaba, Hirokazu Kiyomaru, Fei Cheng, and Sadao Kurohashi.
    "A Reasoning Framework Using Multiple External Tools Based on Large Language Models." (in Japanese)
    The 29th Annual Meeting of the Association for Natural Language Processing (NLP2023), 2023.
    [paper, poster]
  • Tatsuro Inaba, Takuro Fujii, Ryoma Obara, and Koki Shibata.
    "Three Language Models Are Better Than One." (in Japanese)
    The 18th Symposium of the Young Researchers' Association for NLP (YANS), 2023.
    [poster]

Talks/Activities

  • 06/2025: Talk and tutorials (PyTorch/Transformer/Music Generation) at Kyushu University. [talk slides]
  • 03/2025: Talk at Tohoku NLP Group. [slides]
  • 08/2024: Paper presentation at the 16th Cutting-Edge NLP Study Group (最先端NLP勉強会). [slides]
  • 09/2023: Journal article introducing the study "A Reasoning Framework Using Multiple External Tools Based on Large Language Models" (in Japanese). Journal of Natural Language Processing, Vol. 30, No. 3, pp. 1100–1104. [paper]
  • 08/2023: Paper presentation at the 15th Cutting-Edge NLP Study Group (最先端NLP勉強会). [slides]