Gaurav Mantri's Personal Blog.

Prompt Patterns Every Generative AI Developer Must Know

In this post, we will talk about some of the patterns that you should know in order to write effective prompts. This is a continuation of my previous post about Prompts and Prompt Engineering. If you have not done so already, I would strongly encourage you to read that post before continuing with this one.

We start by describing what prompt patterns are and then briefly cover some of the commonly used ones.

So, let’s start!

What are Prompt Patterns?

If you are coming from a software development background, I am pretty sure you are aware of software design patterns, which provide reusable solutions to common software problems and help you write good, maintainable software.

Prompt patterns are very much like software design patterns. They focus on controlling the output generated by Large Language Models (LLMs). As part of prompt engineering, they provide reusable solutions for writing effective prompts.

If you are building Generative AI apps, chances are that you are using some of the patterns described in this post (without realizing that you are using prompt patterns 😀).

Also, as you start building Generative AI apps, you will realize that when writing prompts, you are actually using more than one pattern in a single prompt (and that’s completely ok!).

Prompt Patterns

Here are some of the commonly used prompt patterns.

One-Shot/Few-Shot Learning (or Prompting) Pattern

In this pattern, the LLM is trained (kind of) by including examples of the input the user would give and the output they expect. When exactly one example is given, it is called “One-Shot Prompting”, and when more than one example is given, it is called “Few-Shot Prompting”. For example:

Input: The movie was good but it was a bit long.

Sentiment: Neutral

Input: I did not like this book.

Sentiment: Negative

Input: The food at the restaurant was yummy.
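In code, a few-shot prompt like the one above is simply the labeled example pairs concatenated ahead of the new, unlabeled input. Here is a minimal sketch in Python; the helper name is my own, and the resulting string would be passed to whichever LLM client you happen to use:

```python
# Labeled examples that "teach" the LLM the task inline.
EXAMPLES = [
    ("The movie was good but it was a bit long.", "Neutral"),
    ("I did not like this book.", "Negative"),
]

def build_few_shot_prompt(examples, new_input):
    # Each example becomes an "Input:/Sentiment:" pair; the trailing
    # bare "Sentiment:" cue nudges the LLM to complete the last label.
    parts = [f"Input: {text}\nSentiment: {label}\n" for text, label in examples]
    parts.append(f"Input: {new_input}\nSentiment:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(EXAMPLES, "The food at the restaurant was yummy.")
print(prompt)
```

Sending `prompt` to an LLM should yield a one-word sentiment (here, most likely “Positive”) in the same format as the examples.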


Chain Of Thought Reasoning Pattern

In this pattern, the LLM is asked to proceed step-by-step and present all the steps involved. This is especially useful for reasoning-type questions. Using this pattern reduces the chance of inaccurate outcomes and makes assessing the LLM’s response easier. For example:

When I was 10 years old, my brother was half my age. I am now 50 years old. How old is my brother? Take a step-by-step approach in your response, cite sources, and give reasoning before sharing the final answer.

Meta Language Creation Pattern

In this pattern, one or more special symbols, words, or sentences are explained to the LLM so that it understands their meaning in the conversation that follows. For example:

Keep the following in mind when answering the subsequent questions: When I say twin city or twin cities, I mean Dallas/Fort Worth and not Minneapolis/St. Paul.

Output Automater Pattern

In this pattern, the LLM is asked to generate a script of some sort (instead of instruction text) that the user can then execute rather than carrying out the instructions manually. For example:

Write a python script that would identify the open TCP ports on my Windows server and close those ports.
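For illustration, here is a rough sketch of the kind of script such a prompt might produce. This version only handles the "identify" half, by attempting local TCP connections; actually closing ports would additionally require firewall changes (for example via `netsh` on Windows), which I have deliberately left out:

```python
import socket

def open_tcp_ports(host="127.0.0.1", ports=range(1, 1025)):
    """Return the subset of `ports` that accept TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)  # keep the scan fast; tune for remote hosts
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    print(open_tcp_ports())
```

The point of the pattern is not this particular script but that the LLM hands you something executable instead of a list of manual steps.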

Flipped Interaction Pattern

In the normal course of interaction, we ask questions of an LLM and the LLM provides answers to those questions. In this pattern, the LLM is instead encouraged to ask the questions in order to achieve an objective. The idea behind this pattern is that you may not know what questions to ask an LLM and would rather rely on the LLM’s vast knowledge to guide you. For example:

Ask me questions, one question at a time, so that I can plan a short vacation in Rome. I will be leaving from Washington DC. When I am done answering questions, create an itinerary for me based on the answers I provided.

Persona Pattern

In this pattern, the LLM is instructed to act like a certain kind of person (in other words, assume a persona) and answer the question as that person would. For example:

Act like an economist to explain the importance of Large Language Models in the field of economics.

Audience Persona Pattern

This is the opposite of the Persona pattern. In this pattern, the LLM is instructed to provide an answer that is understandable by a certain kind of person (the audience). Essentially, the LLM sets the tone and content of the answer in such a way that it is understandable by the persona set in the prompt. For example:

Explain large language models to a nine-year-old.

Question Refinement Pattern

In this pattern, the LLM is encouraged to provide a better version of the question asked (and ask the user if they want to use the newer version). This is especially useful when the user asking the question is not an expert in the field of the question being asked and would like to rely on the knowledge the LLM has in that field. For example:

Whenever I ask a question about a software artifact’s security, suggest a better version of the question to use that incorporates information specific to security risks in the language or framework that I am using instead and ask me if I would like to use your question instead.

Alternative Approaches Pattern

In this pattern, the LLM is instructed to provide alternative answers to a given question. This is quite useful when the user asking the question is interested in exploring multiple solutions to their problem and then picking the best one. For example:

I need to travel from Baltimore, MD to Niagara Falls, NY by road. Suggest a few alternate routes I can take. Include the pros and cons of each.

Cognitive Verifier Pattern

In this pattern, the LLM is forced to always subdivide a question into additional questions that can be used to provide a better answer to the original question. This is especially useful when the question being asked is very high level or the user does not have much knowledge about the question. For example,

When I ask you a question, generate three additional questions that would help you give a more accurate answer. When I have answered the three questions, combine the answers to produce the final answer to my original question.

Fact Check List Pattern

In this pattern, the LLM is instructed to produce a list of the facts on which its answer is based. The user can then verify these facts to validate the accuracy and truthfulness of the answer. This is a useful pattern when the user is not an expert in the domain of the question being asked and wants a concrete list of claims to check. For example:

When you generate an answer, create a set of facts that the answer depends on that should be fact-checked and list this set of facts at the end of your output. Only include facts related to authentication and authorization.

Template Pattern

In this pattern, the LLM is forced to produce output in a specific format. The user specifies a template for the output format, along with placeholders, and asks the LLM to produce output in that format by filling in the placeholders. For example:

Please create an item list for me to make a dining table and 4 chairs from scratch. I am going to provide a template for your output. <placeholder> are my placeholders for content. Try to fit the output into one or more of the placeholders that I list. Please preserve the formatting and overall template that I provide.

This is the template:

Aisle <name of aisle>: <item needed from aisle>, <qty> (<furniture used in>)

Infinite Generation Pattern

In this pattern, the LLM is instructed to generate a series of outputs without requiring the user to re-enter the generation prompt for each output. For example:

I have the following table called Users in my SQL database to store user information:
Id nvarchar(32) Primary Key

Name nvarchar(100)

Email nvarchar(100)

CreatedDate Date

ModifiedDate Date

Write SQL statements for creating, updating, reading and deleting user records. Use placeholders instead of actual values when creating the SQL statements.
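The statements the LLM would generate for this prompt look roughly like the parameterized SQL below. To keep the sketch self-contained and runnable I am using SQLite (with `?` placeholders and TEXT columns) rather than the exact `nvarchar` types above, so treat the details as illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Users (
    Id TEXT PRIMARY KEY,
    Name TEXT,
    Email TEXT,
    CreatedDate TEXT,
    ModifiedDate TEXT)""")

# Create: placeholders (?) stand in for actual values.
conn.execute(
    "INSERT INTO Users (Id, Name, Email, CreatedDate, ModifiedDate) "
    "VALUES (?, ?, ?, ?, ?)",
    ("u1", "Alice", "alice@example.com", "2024-01-01", "2024-01-01"),
)
# Read
row = conn.execute(
    "SELECT Name, Email FROM Users WHERE Id = ?", ("u1",)
).fetchone()
# Update
conn.execute(
    "UPDATE Users SET Email = ?, ModifiedDate = ? WHERE Id = ?",
    ("alice@new.example.com", "2024-02-01", "u1"),
)
# Delete
conn.execute("DELETE FROM Users WHERE Id = ?", ("u1",))
```

A single prompt here yields the whole family of CRUD statements, which is exactly the "series of outputs" this pattern is after.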

Visualization Generator Pattern

In this pattern, the LLM is instructed to generate output in a format that can be fed to a visualization tool that accepts text as input (e.g., Dall-E or Graphviz). This pattern overcomes LLMs’ inability to create images by generating textual input, in the correct format, to plug into another tool that produces the actual diagram. For example:

Here’s a simple user flow for my web application:

User is on login page. When the user logs in, application checks if the credentials are correct, then the user is taken to the dashboard. If the user credentials are incorrect, then the user is redirected back to the login page.

Create a flowchart for this user flow in Graphviz Dot format.
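The Graphviz DOT output the LLM might produce for this prompt would look something like the sketch below (node and edge labels are my own illustration); pasting it into any Graphviz renderer yields the flowchart:

```dot
digraph LoginFlow {
    Login     [label="Login page"];
    Check     [label="Credentials correct?", shape=diamond];
    Dashboard [label="Dashboard"];

    Login -> Check     [label="log in"];
    Check -> Dashboard [label="yes"];
    Check -> Login     [label="no"];
}
```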

Game Play Pattern

In this pattern, the LLM is asked to create a game around a certain topic. This is quite useful when a user wants the LLM to generate scenarios or questions revolving around a specific topic that require them to apply problem-solving or other skills to accomplish a task related to the scenario. For example:

We are going to play a trivia game testing my knowledge of Tom Cruise movies. You will ask me questions about Tom Cruise’s movies. For each question, you will give me 4 options and ask me to choose one. If I pick the correct option, you give me 10 points. If I pick an incorrect option, you deduct 5 points. At the end of the game, you will tell me my score. The game will have just 3 questions.

Reflection Pattern

In this pattern, the LLM is asked to provide the rationale behind its output (along with the output). This is quite useful when a user wants to assess the validity of the LLM’s output as well as understand how the LLM came up with a particular answer. Furthermore, this pattern can also be used to fine-tune prompts, because the user now has a better understanding of how the LLM is producing the output. For example:

When you provide an answer, please explain the reasoning and assumptions behind your selection of software frameworks. If possible, use specific examples or evidence with associated code samples to support your answer of why the framework is the best selection for the task.

Refusal Breaker Pattern

In this pattern, when the LLM is unable (or refuses) to answer a question for any reason, it is encouraged to help the user by providing alternate or rephrased questions that the user can ask, along with the reasons for not answering the original question. For example:

Whenever you can’t answer a question, explain why and provide one or more alternate wordings of the question that you can’t answer so that I can improve my questions.

Context Manager Pattern

In this pattern, the LLM is encouraged to keep the conversation focused by asking it to concentrate only on certain topics or to remove certain topics from consideration. This pattern gives users greater control over what the LLM should consider or ignore when generating output. For example:

When analyzing the following pieces of code, only consider code complexity aspects. Do not consider formatting or naming conventions.

Recipe Pattern

In this pattern, the LLM is instructed to provide a sequence of steps based on some provided input data to achieve a stated goal. This pattern combines the Template, Alternative Approaches, and Reflection patterns. For example,

I am trying to deploy an application to the cloud. I know that I need to install the necessary dependencies on a virtual machine for my application. I know that I need to sign up for an account in Azure. Please provide a complete sequence of steps. Please fill in any missing steps. Please identify any unnecessary steps.

Ask for Input Pattern

In this pattern, the LLM is instructed to wait for the user’s input and then generate output based on that input and the other things specified in the prompt. This is quite useful in scenarios where we do not want the LLM to automatically start generating output, but rather wait for the user to provide input and generate output for that input. For example:

From now on, I am going to provide you some paragraphs of texts. You will summarize each paragraph. Now ask me for the first paragraph of text.

Outline Expansion Pattern

In this pattern, the LLM is instructed to generate an outline (a list of items) and then expand on items in that list. This is especially useful when a user wants to generate really long text (a book or an article) that would exceed the output limits of the LLM. For example:

I want to write a book on Generative AI. Generate a bullet point outline for that and then ask me for the bullet point you should expand on.

Menu Actions Pattern

In this pattern, the LLM is instructed to take a predefined action in response to a short command. Generally, this pattern is used in conjunction with the Outline Expansion pattern, where the LLM is instructed to perform some action on the list of items generated through that pattern. For example:

Whenever I write “text <bullet point> <paragraphs>”, you will write some text for the bullet point specified in the <bullet point> placeholder. <paragraphs> is the placeholder for the number of paragraphs you will write. If <paragraphs> is missing from the input, you will generate exactly one paragraph of text.

Tail Generation Pattern

In this pattern, the LLM is forced to generate a tail at the end of its output to remind itself of the task at hand. This is especially useful in scenarios where a user is having a long conversation with the LLM. For example:

I want to write a book on Generative AI. Generate a bullet point outline for that. At the end, ask me what bullet point to expand on.

Semantic Filter Pattern

In this pattern, the LLM is asked to remove or keep certain information in a text based on specified semantic rules. This is quite useful when the user wants to remove sensitive information (like medical history, personally identifying information, etc.) from input text. For example:

Filter this information to remove any personally identifying information or information that could potentially be used to re-identify the person.


For writing this blog post, I referenced the following resources:

If you are interested in learning more about prompt patterns, I would highly recommend that you check out these resources.


This turned out to be a rather long post 😀 but I hope you have found it useful and worthy of your time. Please share your thoughts by providing comments.
