
A Few Prompt Engineering Techniques


Aaweg I
Jan 17, 2024

Prompt engineering is the art of crafting input requests for LLMs so that they produce the intended output. Let’s explore a few techniques:

1. LEAST TO MOST PROMPTING

Inference is about reaching a conclusion based on evidence and reasoning. The evidence is the knowledge an LLM absorbed during training, and reasoning ability can be given to the LLM by providing a few examples of how to reason over that evidence.

So, what we do is:

  1. Decompose a complex problem into a series of simpler sub-problems.
  2. Then solve each of these sub-problems in sequence.

Solving each sub-problem is made easier by the answers to the previously solved sub-problems. Least-to-Most Prompting is therefore a progressive sequence of prompts that builds toward a final conclusion, enabling the LLM to handle complex reasoning.
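
As an illustration, here is a minimal Python sketch of that two-stage loop. It assumes a hypothetical `llm()` helper that sends a prompt to whichever model or API you use; everything else simply feeds earlier answers back into later prompts.

```python
# Minimal sketch of Least-to-Most Prompting.
# `llm(prompt)` is a hypothetical helper that sends a prompt to your
# model of choice and returns its text completion.

def llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM provider.")

def least_to_most(question: str) -> str:
    # Stage 1: ask the model to decompose the problem into simpler sub-problems.
    decomposition = llm(
        "Break the following problem into a numbered list of simpler "
        f"sub-problems, ordered from easiest to hardest:\n\n{question}"
    )
    sub_problems = [line for line in decomposition.splitlines() if line.strip()]

    # Stage 2: solve the sub-problems one by one, feeding earlier answers
    # back into the prompt so each step can build on the previous ones.
    context = f"Problem: {question}\n"
    answer = ""
    for sub in sub_problems:
        answer = llm(f"{context}\nNow answer this sub-problem: {sub}")
        context += f"\nSub-problem: {sub}\nAnswer: {answer}\n"

    # The answer to the last (hardest) sub-problem is the final answer.
    return answer
```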

2. SELF ASK PROMPTING

Self-Ask Prompting is, in a sense, a combination of direct prompting and Chain-of-Thought prompting.

In this method, the LLM decomposes the question into smaller follow-up questions, answers them itself, recognises when it has enough information, and then moves from the intermediate answers to a final answer.
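
Below is a similar Python sketch, again assuming a hypothetical `llm()` helper. A single worked example, written in the "Follow up / Intermediate answer / So the final answer is" format used in the original Self-Ask paper, teaches the model the pattern; the final answer is then parsed out of the generated trace.

```python
# Minimal sketch of Self-Ask Prompting.

def llm(prompt: str) -> str:  # hypothetical helper, same as in the previous sketch
    raise NotImplementedError("Wire this up to your LLM provider.")

# One worked example demonstrating the self-ask format.
SELF_ASK_EXAMPLE = """Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?
Are follow up questions needed here: Yes.
Follow up: How old was Theodor Haecker when he died?
Intermediate answer: Theodor Haecker was 65 years old when he died.
Follow up: How old was Harry Vaughan Watkins when he died?
Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.
So the final answer is: Harry Vaughan Watkins.
"""

def self_ask(question: str) -> str:
    prompt = (
        SELF_ASK_EXAMPLE
        + f"\nQuestion: {question}\n"
        + "Are follow up questions needed here:"
    )
    # The model writes its own follow-up questions and intermediate answers.
    trace = llm(prompt)
    # The final answer follows the "So the final answer is:" marker.
    marker = "So the final answer is:"
    return trace.split(marker)[-1].strip() if marker in trace else trace.strip()
```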
