Chain-of-thought (CoT) prompting is a few-shot prompting method designed to improve a language model’s performance on reasoning tasks. Where traditional few-shot prompts supply question-answer pairs, CoT prompts supply question-process-answer triplets, in which the middle element demonstrates the intermediate reasoning steps that lead to the answer (both formats are sketched in code after the list). The following are several salient findings from Wei et al. (2023):

  • A large pre-trained model (PaLM 540B), prompted with CoT but not fine-tuned, outperformed the prior fine-tuned state of the art on the GSM8K math word problem benchmark.
  • CoT performance improvements are an emergent property; they appear only for models with at least ~100B parameters.
  • The benefit of CoT scales with task complexity: gains were largest on harder, multi-step problems and small or absent on simple ones.
  • Based on ablation studies (each condition is sketched below), CoT appears to provide benefits beyond what can be attributed to:
    • The tendency of CoT to “expend” more test-time computation by generating longer outputs;
    • Improved access to knowledge learned during training, via the implicit guidance of the worked examples; or
    • The implicit provision of an equation or template.
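
To make the formats concrete, the sketch below builds a one-shot standard prompt and a one-shot CoT prompt. The exemplar text is adapted from the paper’s running tennis-ball example; the `build_prompt` helper is illustrative, not part of the paper or any particular API.

```python
# Standard few-shot exemplar: a question-answer pair.
STANDARD_EXEMPLAR = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: The answer is 11.
"""

# CoT exemplar: a question-process-answer triplet. The added middle
# portion demonstrates the intermediate reasoning steps.
COT_EXEMPLAR = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.
"""

def build_prompt(exemplar: str, question: str) -> str:
    """Prepend one worked exemplar to the new question."""
    return f"{exemplar}\nQ: {question}\nA:"

question = ("The cafeteria had 23 apples. They used 20 to make lunch "
            "and bought 6 more. How many apples do they have?")
print(build_prompt(COT_EXEMPLar := COT_EXEMPLAR, question))
```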
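The three ablation conditions in the last bullet correspond to exemplar formats like the following. This is a sketch for orientation only; the literal strings are paraphrases of the paper’s conditions, not its exact ablation prompts.

```python
QUESTION = ("Q: Roger has 5 tennis balls. He buys 2 more cans of tennis "
            "balls. Each can has 3 tennis balls. How many tennis balls "
            "does he have now?")

FULL_COT = ("A: 2 cans of 3 tennis balls each is 6 tennis balls. "
            "5 + 6 = 11. The answer is 11.")

# 1. "Equation only": keep just the equation, dropping the
#    natural-language reasoning steps.
EQUATION_ONLY = "A: 5 + 2 * 3 = 11. The answer is 11."

# 2. "Variable compute only": replace the chain of thought with dots so
#    the model emits a comparable number of extra tokens without any
#    actual reasoning content.
VARIABLE_COMPUTE_ONLY = "A: " + "." * len(FULL_COT) + " The answer is 11."

# 3. "Chain of thought after answer": state the answer first, so the
#    rationale cannot causally contribute to producing it and can only
#    help by activating relevant knowledge.
COT_AFTER_ANSWER = ("A: The answer is 11. 2 cans of 3 tennis balls each "
                    "is 6 tennis balls. 5 + 6 = 11.")

for name, answer in [("equation only", EQUATION_ONLY),
                     ("variable compute only", VARIABLE_COMPUTE_ONLY),
                     ("CoT after answer", COT_AFTER_ANSWER)]:
    print(f"--- {name} ---\n{QUESTION}\n{answer}\n")
```

None of these ablation formats recovered the full benefit of CoT prompting, which is the basis for the claim in the bullet above.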