Early prompts helped models think. Modern prompts help humans trust, scan, and decide.
Modern models reason internally; explicit CoT now adds verbosity without improving accuracy.
Forces entity recall while separating thinking from writing. Aligns with how executives read.
Locks narrative before content. Reduces hallucination and mirrors executive thinking patterns.
Adds verification pass that reduces confident hallucinations and improves alignment to intent.
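A minimal sketch of that verification pass, assuming a generic `llm(prompt) -> str` callable rather than any specific SDK; the prompt wording and two-stage flow are illustrative, not a fixed recipe:

```python
from typing import Callable

def draft_then_verify(task: str, llm: Callable[[str], str]) -> str:
    """Draft an answer, then run a second pass that checks it before finalizing."""
    draft = llm(f"Answer the following task:\n\n{task}")

    # Verification pass: ask the model to list claims, flag unsupported ones,
    # and rewrite the draft so it only keeps what it can stand behind.
    verification_prompt = (
        "You wrote the draft below in response to the task.\n"
        f"Task: {task}\n\nDraft:\n{draft}\n\n"
        "1. List each factual claim in the draft.\n"
        "2. Mark any claim you cannot verify as UNSUPPORTED.\n"
        "3. Rewrite the draft, removing or hedging unsupported claims.\n"
        "Return only the revised answer."
    )
    return llm(verification_prompt)
```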
Separates facts, assumptions, and inference. Makes uncertainty explicit for decision-makers.
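One way to enforce that separation is a fixed output structure. A sketch assuming the same generic `llm` callable; the section headings are illustrative:

```python
from typing import Callable

FACT_ASSUMPTION_INFERENCE_PROMPT = """\
Analyze the question below and structure your answer in three sections:

## Facts
Only statements directly supported by the provided material.

## Assumptions
Anything you are taking as given that is not in the material.

## Inference
Conclusions drawn from facts plus assumptions, with a confidence note
(high / medium / low) for each.

Question: {question}
Material: {material}
"""

def structured_analysis(question: str, material: str,
                        llm: Callable[[str], str]) -> str:
    # The three-section template makes it obvious to the reader which parts
    # are grounded and which parts the model is guessing.
    return llm(FACT_ASSUMPTION_INFERENCE_PROMPT.format(
        question=question, material=material))
```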
Foundation of agent systems. Lets the model decide when to offload work to a tool versus reason in text.
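A stripped-down version of that decision, assuming a generic `llm` callable and a hypothetical calculator tool; real agent frameworks wrap this with schemas, sandboxing, and retries:

```python
from typing import Callable

def answer_with_optional_tool(question: str, llm: Callable[[str], str]) -> str:
    """Let the model choose between answering directly and calling a tool."""
    route = llm(
        "You can reply with either:\n"
        "ANSWER <your answer>\n"
        "or\n"
        "CALC <arithmetic expression>\n"
        f"Question: {question}"
    ).strip()

    if route.startswith("CALC"):
        expression = route[len("CALC"):].strip()
        # eval() stands in here for a real, sandboxed calculator tool.
        result = eval(expression, {"__builtins__": {}}, {})
        return llm(f"Question: {question}\nCalculator result: {result}\n"
                   "Write the final answer.")
    return route.removeprefix("ANSWER").strip()
```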
Each step has one job. Easier to debug and more repeatable across users and runs.
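A sketch of that decomposition as a simple chain, again assuming a generic `llm` callable; the three steps (extract, draft, polish) are just one example split:

```python
from typing import Callable

def summarize_in_steps(document: str, llm: Callable[[str], str]) -> str:
    """Chain three single-purpose prompts instead of one do-everything prompt."""
    # Step 1: extraction only. Easy to inspect if later steps go wrong.
    key_points = llm(f"List the 5 most important points in this document:\n{document}")

    # Step 2: drafting only, from the extracted points rather than the raw text.
    draft = llm(f"Write a one-paragraph summary based on these points:\n{key_points}")

    # Step 3: polish only. Tone and length constraints live in one place.
    return llm(f"Rewrite this summary for an executive audience, under 80 words:\n{draft}")
```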
Define output schema upfront. Model extracts only what matches, reducing hallucination and enabling validation.
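A sketch of schema-first extraction, assuming a generic `llm` callable that returns JSON text; the invoice field names are illustrative, and the parse step doubles as validation:

```python
import json
from typing import Callable

INVOICE_SCHEMA = {
    "vendor": "string",
    "invoice_date": "YYYY-MM-DD",
    "total_amount": "number",
    "currency": "ISO 4217 code",
}

def extract_invoice(text: str, llm: Callable[[str], str]) -> dict:
    """Give the model the target schema before the text it should extract from."""
    prompt = (
        "Extract the following fields and return only JSON matching this schema.\n"
        f"Schema: {json.dumps(INVOICE_SCHEMA)}\n"
        "Use null for any field not present in the text.\n\n"
        f"Text:\n{text}"
    )
    raw = llm(prompt)
    data = json.loads(raw)  # fails loudly if the output is not valid JSON

    # Cheap validation: reject keys that are not in the schema.
    unexpected = set(data) - set(INVOICE_SCHEMA)
    if unexpected:
        raise ValueError(f"Unexpected fields in extraction: {unexpected}")
    return data
```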
Interleaves reasoning and action. Model thinks, acts, observes result, then thinks again. Reduces hallucination by grounding in external observations.
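A bare-bones version of that loop, assuming a generic `llm` callable and a dictionary of tool functions; real implementations add stop sequences and stricter parsing:

```python
from typing import Callable

def react_loop(question: str, llm: Callable[[str], str],
               tools: dict[str, Callable[[str], str]], max_steps: int = 5) -> str:
    """Alternate Thought -> Action -> Observation until the model answers."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(
            transcript +
            "Respond with either:\n"
            "Thought: <reasoning>\nAction: <tool_name>: <input>\n"
            "or\nFinal Answer: <answer>\n"
        )
        transcript += step + "\n"

        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()

        if "Action:" in step:
            action_line = step.split("Action:", 1)[1].strip()
            tool_name, _, tool_input = action_line.partition(":")
            observation = tools[tool_name.strip()](tool_input.strip())
            # Grounding: the next thought sees the tool's actual output.
            transcript += f"Observation: {observation}\n"
    return "No answer within step budget."
```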
Provide class definitions and optional examples. Model maps input to most appropriate class. Few-shot examples dramatically improve edge case handling.
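A sketch of definition-plus-examples classification; the class names, definitions, and few-shot pairs are all illustrative, and the generic `llm` callable is assumed as before:

```python
from typing import Callable

CLASSES = {
    "billing": "Questions about invoices, charges, refunds, or payment methods.",
    "technical": "Bug reports, error messages, or product malfunctions.",
    "account": "Login problems, password resets, or profile changes.",
}

FEW_SHOT = [
    ("I was charged twice for my subscription.", "billing"),
    ("The export button throws a 500 error.", "technical"),
]

def classify_ticket(text: str, llm: Callable[[str], str]) -> str:
    definitions = "\n".join(f"- {name}: {desc}" for name, desc in CLASSES.items())
    examples = "\n".join(f'Ticket: "{t}" -> {label}' for t, label in FEW_SHOT)
    prompt = (
        "Classify the ticket into exactly one class.\n"
        f"Classes:\n{definitions}\n\nExamples:\n{examples}\n\n"
        f'Ticket: "{text}" ->'
    )
    label = llm(prompt).strip().lower()
    # Guard against answers outside the label set.
    return label if label in CLASSES else "unknown"
```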
Transform clustering into classification: first prompt the LLM to generate candidate labels, then assign each item to one of them. Combines semantic understanding with the structure of clustering.
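A two-prompt sketch of that label-then-assign flow, assuming a generic `llm` callable; the sample size and prompt wording are illustrative:

```python
from typing import Callable

def cluster_via_labels(items: list[str], llm: Callable[[str], str],
                       sample_size: int = 30) -> dict[str, list[str]]:
    """Ask the LLM for candidate labels first, then classify every item against them."""
    # Prompt 1: propose labels from a sample of the data.
    sample = "\n".join(f"- {item}" for item in items[:sample_size])
    labels_text = llm(
        "Here is a sample of items. Propose 3-8 short category labels that cover them.\n"
        f"{sample}\nReturn one label per line."
    )
    labels = [line.strip("- ").strip() for line in labels_text.splitlines() if line.strip()]

    # Prompt 2 (per item): assign each item to one of the proposed labels.
    clusters: dict[str, list[str]] = {label: [] for label in labels}
    for item in items:
        choice = llm(
            f"Labels: {', '.join(labels)}\n"
            f"Item: {item}\nReturn the single best label."
        ).strip()
        clusters.setdefault(choice, []).append(item)
    return clusters
```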
Use LLM as a relevance judge after initial retrieval. Listwise ranking outperforms pointwise. Two-stage pipeline: fast retriever (100 docs) → LLM reranker (top 10).
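A sketch of the second stage, listwise reranking over an already-retrieved candidate list; it assumes a generic `llm` callable and that the fast first-stage retriever has run elsewhere:

```python
from typing import Callable

def rerank_listwise(query: str, candidates: list[str],
                    llm: Callable[[str], str], top_k: int = 10) -> list[str]:
    """Show the LLM all candidates at once and ask for an ordering."""
    numbered = "\n".join(f"[{i}] {doc}" for i, doc in enumerate(candidates))
    ranking = llm(
        f"Query: {query}\n\nDocuments:\n{numbered}\n\n"
        "Rank the documents from most to least relevant to the query.\n"
        "Return only the document numbers, comma-separated."
    )
    order = [int(tok) for tok in ranking.replace("[", "").replace("]", "").split(",")
             if tok.strip().isdigit()]
    # Fall back to the original order for anything the model omitted.
    seen = [i for i in order if 0 <= i < len(candidates)]
    seen += [i for i in range(len(candidates)) if i not in seen]
    return [candidates[i] for i in seen[:top_k]]
```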
Generate hypothetical answer document, then use its embedding for retrieval. Generate 5 docs and average embeddings. Works best when LLM has domain knowledge.
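A sketch of that retrieval flow, assuming generic `llm` and `embed` callables and a cosine-similarity search over precomputed document embeddings; the count of five hypothetical documents follows the note above:

```python
import math
from typing import Callable

def hyde_query_vector(query: str, llm: Callable[[str], str],
                      embed: Callable[[str], list[float]],
                      n_docs: int = 5) -> list[float]:
    """Embed hypothetical answers instead of the raw query, then average."""
    vectors = []
    for _ in range(n_docs):
        hypothetical = llm(f"Write a short passage that answers: {query}")
        vectors.append(embed(hypothetical))
    # Element-wise mean of the hypothetical-document embeddings.
    return [sum(dims) / len(dims) for dims in zip(*vectors)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def hyde_search(query: str, corpus: dict[str, list[float]],
                llm: Callable[[str], str], embed: Callable[[str], list[float]],
                top_k: int = 10) -> list[str]:
    """Rank corpus doc ids by similarity to the averaged hypothetical embedding."""
    qvec = hyde_query_vector(query, llm, embed)
    return sorted(corpus, key=lambda doc_id: cosine(qvec, corpus[doc_id]),
                  reverse=True)[:top_k]
```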
Sample multiple reasoning paths at temperature 0.7, select most common final answer. 10-15 samples optimal. +17.9% on GSM8K vs greedy CoT.
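A sketch of self-consistency voting, assuming an `llm` callable that accepts a temperature argument and a task-specific `extract_answer` function for pulling the final answer out of each completion; the sample count follows the note above:

```python
from collections import Counter
from typing import Callable

def self_consistent_answer(question: str,
                           llm: Callable[[str, float], str],
                           extract_answer: Callable[[str], str],
                           n_samples: int = 12,
                           temperature: float = 0.7) -> str:
    """Sample several reasoning paths and return the most common final answer."""
    prompt = f"{question}\nThink step by step, then state the final answer on its own line."
    answers = []
    for _ in range(n_samples):
        # Non-zero temperature so the reasoning paths actually differ.
        completion = llm(prompt, temperature)
        answers.append(extract_answer(completion))
    # Majority vote over the extracted final answers, not the full reasoning text.
    return Counter(answers).most_common(1)[0][0]
```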