Empowering Language Models to Reason and Act

THE GEN AI SERIES

Rahul S
7 min read · Jan 30, 2024

PROMPTING

With in-context learning, we prompt a language model to learn a task from examples supplied directly in the prompt.
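A toy sketch makes this concrete. The sentiment task, example texts, and helper below are illustrative assumptions, not part of any specific API; the point is simply that the "training examples" live inside the prompt itself:

```python
# A minimal sketch of in-context (few-shot) learning: the "training"
# happens entirely inside the prompt. Send the resulting string to
# whichever chat/completions endpoint you use.

few_shot_examples = [
    ("I loved this film, the pacing was perfect.", "positive"),
    ("The battery died after two hours. Useless.", "negative"),
]

def build_few_shot_prompt(new_input: str) -> str:
    """Assemble a prompt that teaches the task purely by example."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in few_shot_examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {new_input}")
    lines.append("Sentiment:")  # the model is expected to complete this line
    return "\n".join(lines)

prompt = build_few_shot_prompt("Great value for the price, would buy again.")
print(prompt)
```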

RAG (retrieval-augmented generation) extends in-context learning: we inject information retrieved from a document set into the prompt, allowing the language model to make inferences about information it has never seen before.
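Here is a stripped-down sketch of that idea. The document set is made up, and naive word overlap stands in for the embedding-based retrieval a real system would use; only the shape of the pipeline (retrieve, then inject into the prompt) is the point:

```python
# A toy RAG sketch: score documents against the question, then inject
# the best matches into the prompt as context for the model to use.

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping to the EU typically takes 5 to 7 business days.",
    "Support is available by email 24/7 and by phone on weekdays.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question (stand-in for a vector store)."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_rag_prompt(question: str) -> str:
    """Inject the retrieved passages into the prompt as grounding context."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_rag_prompt("How long do I have to return an item?"))
```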

Whatever we want the language model to do, we express as a prompt. But a single prompt can still make the language model hallucinate (it gives us a response fabricated with fluency in mind), even when it now has context in the form of a knowledge source.

REASONING

The idea is to guide the language model toward the answer, that is, to make it reason. We do this with the help of so-called agents.

An agent lets a language model break a task into multiple steps and then execute those steps. One of the first big breakthroughs in this domain was chain-of-thought prompting (proposed in this paper).

Chain-of-thought prompting is a form of in-context learning that uses examples of logical reasoning to teach a model how to “think through” a problem, as the sketch below shows.
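A minimal sketch of such a prompt follows. The worked example and the questions are our own, written in the spirit of the technique rather than taken from the paper; the key difference from plain few-shot prompting is that the exemplar includes the intermediate reasoning, not just the final answer:

```python
# A minimal chain-of-thought prompt: the few-shot example shows its work
# step by step, so the model is nudged to reason before answering.

cot_example = (
    "Q: A library has 4 shelves with 12 books each. It receives 15 more books. "
    "How many books does it have now?\n"
    "A: The shelves hold 4 * 12 = 48 books. Adding the new books gives "
    "48 + 15 = 63. The answer is 63.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked, step-by-step example so the model imitates the reasoning."""
    return f"{cot_example}\nQ: {question}\nA:"

prompt = build_cot_prompt(
    "A train travels 60 km per hour for 2 hours, then 40 km per hour for 3 hours. "
    "How far does it travel in total?"
)
print(prompt)  # the model should reply with steps like 60*2=120, 40*3=120, total 240
```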
