Large Language Model Reasoning Failures
Peiyang Song, Pengrui Han, Noah Goodman. Large Language Model Reasoning Failures. arXiv:2602.06176v1 [cs.AI]. https://doi.org/10.48550/arXiv.2602.06176
Abstract: Large Language Models (LLMs) have exhibited remarkable reasoning capabilities, achieving impressive results across a wide range of tasks. Despite these advances, significant reasoning failures persist, arising even in seemingly simple scenarios. To systematically understand and address these shortcomings, we present the first comprehensive survey dedicated to reasoning failures in LLMs. We introduce a novel categorization framework that distinguishes reasoning into embodied and non-embodied types, with the latter further subdivided into informal (intuitive) and formal (logical) reasoning. In parallel, we classify ...