Prompt Engineering: LLM Prompting Notes for RAG


Rahul S
3 min read · Feb 13, 2024

1. Thread of Thought (ThoT)

Paper: Thread of Thought Unraveling Chaotic Contexts

LLMs struggle to manage chaotic contexts, where relevant and irrelevant retrieved passages are mixed together, and as a result they often omit crucial details.

“Thread of Thought” (ThoT) breaks an extensive context into manageable parts and instructs the model to summarize and analyze each part in turn, selectively keeping only the information relevant to the question.

ThoT prompts are structured: they walk the model through step-by-step dissection, summarization, and critical evaluation of the context.
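A minimal sketch of how a ThoT-style prompt could be assembled around retrieved passages. The function name, passage formatting, and the example at the bottom are illustrative assumptions; the instruction sentence follows the step-by-step trigger wording reported in the ThoT paper, though the exact template may differ from the paper's.

```python
# Sketch only: build a Thread-of-Thought prompt from retrieved passages.
# build_thot_prompt and the passage labels are illustrative, not from the paper.

def build_thot_prompt(retrieved_passages: list[str], question: str) -> str:
    """Assemble a ThoT-style prompt: context, question, then the step-by-step trigger."""
    # Number the retrieved passages so the model can refer back to them.
    context = "\n\n".join(
        f"Passage {i + 1}: {p}" for i, p in enumerate(retrieved_passages)
    )
    return (
        f"{context}\n\n"
        f"Question: {question}\n"
        # ThoT trigger sentence: ask for stepwise summarization and analysis.
        "Walk me through this context in manageable parts step by step, "
        "summarizing and analyzing as we go.\n"
        "Answer:"
    )


if __name__ == "__main__":
    passages = [
        "The Eiffel Tower was completed in 1889 for the World's Fair.",
        "Gustave Eiffel's company designed and built the tower.",
    ]
    print(build_thot_prompt(passages, "Who built the Eiffel Tower, and when?"))
```

The resulting string is sent to the LLM as-is; the trigger sentence is what nudges the model to process the chaotic context piece by piece instead of attending to it all at once.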

2. Chain of Note (CoN)

Paper: Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models

RAG systems often retrieve irrelevant documents, or cannot tell whether the retrieved context is sufficient for an accurate answer. As a result, they miss subtleties in questions or documents, especially with intricate or indirect queries.

Response generation also becomes particularly difficult when the retrieved documents conflict with one another: the model must decide which information is credible and relevant despite the contradictions.

Chain-of-Note (CoN) enhances robustness by having the model first write a short reading note for each retrieved document, judging its relevance and reliability, and only then compose the final answer, or state that the question cannot be answered from the retrieved context.
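A comparable sketch of a CoN-style prompt, assuming the two-stage pattern described above: per-document reading notes first, then the final answer. The function name and the instruction wording are illustrative assumptions, not the paper's exact template.

```python
# Sketch only: build a Chain-of-Note-style prompt over retrieved documents.
# build_con_prompt and the instruction text are illustrative, not from the paper.

def build_con_prompt(retrieved_docs: list[str], question: str) -> str:
    """Assemble a CoN-style prompt: per-document reading notes, then a final answer."""
    # Number the documents so each reading note can reference its source.
    docs = "\n\n".join(
        f"Document {i + 1}: {d}" for i, d in enumerate(retrieved_docs)
    )
    return (
        f"Question: {question}\n\n"
        f"{docs}\n\n"
        "Task: For each document above, write a short reading note stating "
        "whether it is relevant to the question and what it contributes. "
        "If the documents conflict, say which one you trust and why. "
        "If none of them contain the answer, say the question cannot be "
        "answered from the given context. Then write the final answer.\n\n"
        "Reading notes:"
    )


if __name__ == "__main__":
    docs = [
        "Report A: The product launched in March 2021.",
        "Report B: The product launched in May 2021.",
    ]
    print(build_con_prompt(docs, "When did the product launch?"))
```

The notes force the model to surface irrelevant or conflicting retrievals explicitly before answering, which is what makes CoN more robust than answering directly from the raw retrieved context.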
