Zhicheng Yang, Zhijiang Guo, Yinya Huang, Yongxin Wang, Wenlei Shi, Yiwei Wang, Xiaodan Liang, Jing Tang
Accordion-Thinking enables LLMs to self-regulate reasoning step granularity, achieving efficient and readable reasoning with reduced computational overhead.
The paper introduces Accordion-Thinking, a method that lets large language models (LLMs) regulate the granularity of their own reasoning. By periodically summarizing completed reasoning steps and discarding the detailed intermediate text, the model keeps its working context compact, reducing computational cost while maintaining accuracy. The summaries also serve as a clear, human-readable record of the model's thought process. The result is faster, more efficient problem-solving without sacrificing solution quality.