'thinking' sounds strange, doesn't it? Here are excerpts from the abstracts:
Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers.
Long chain-of-thought (CoT) significantly enhances large language models' (LLM) reasoning capabilities. However, the extensive reasoning traces lead to inefficiencies and an increased time-to-first-token (TTFT).
Besides improving the answers, another goal is to increase explainability, that is, showing how and why the AI arrived at the answer instead of acting as a magical black box. (Explainability has been one of the holy grails of neural networks for decades.)
This 'Illusion of Thinking' paper is worth reading: it shows that, as the problem size grows, at a certain point the model's performance collapses rapidly.

Apple Machine Learning Research
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity