Gaurav Mantri's Personal Blog.

Generative AI – All About Prompts

In this short post, we will talk about prompts. We will talk about what prompts are, prompt engineering and why it is such a big deal. We will also briefly talk about prompt patterns.

So let’s begin!

Prompts

In very simple terms, prompts are inputs to a large language model (LLM). Based on the prompt, an LLM provides desired output.

If you are coming from database world, you write queries to extract information from a database. If you were to imagine an LLM as a database (that contains massive information), the prompts are the queries to the LLM based on which the LLM generates an output.

However, unlike database queries which follows a certain syntax (T-SQL, for example) and are bound by rules, prompts do not have rigid syntax or rules. You write prompts in natural language (which an LLM understands very well) and structure the text in your prompt in such a way so that you get the output you desire.

Prompt Engineering

To me, prompt engineering is the art and science of writing effective prompts.

When I first started working with Generative AI, I thought why folks are making big deal about prompt engineering. After all, you want some information from an LLM and to get that information, you write some text (prompt) and should get the output you want. I was proven wrong very shortly 😀.

Think of an LLM as a very smart human assistant you have got who is in possession of a great deal of knowledge (and is eager to show it off). Unless you tell this human assistant what information you want and how you want it, you are going to get very wild results. Effectively constraining an LLM to produce the output you want is essentially prompt engineering.

A few things that you can keep in mind when creating prompts:

Token size & costs

LLMs are often constrained by the size of the input you can provide (also known as token size). Furthermore, you are charged by the token usage. So when creating the prompts, you will need to be creative and surgical in your approach to ensure that you do not exceed those limits and also keep the costs under control.

Output size, tone and format

LLMs are extremely creative and are capable of producing huge amounts of text data.

With proper prompting you can control how much data you would want LLM to output. For example, including “Provide an answer in no more than 500 words” in the prompt will force an LLM to constrain the output in approximately those many words.

With proper prompting, you can also control the tone of the output. For example, having a prompt like “Explain Newton’s 3rd law of motion in a poem” will produce an entirely different output than having a prompt like “Explain Newton’s 3rd law of motion”.

With proper prompting, you can also control the format in which you want to see the data (default being text). For example, including “Provide an answer in JSON format with the following structure (your JSON structure)” in the prompt will force an LLM to provide the output in JSON format.

One-shot and few-shots prompting

Some times you would want to teach an LLM by including some examples specific to your use case in the prompt. This is known as one-shot prompting (when only one example is included) or few-shots prompting (when more than one examples are included). Both of these techniques have proven to be very effective in getting desired outputs from an LLM.

Prompt Injection

Prompt injection is a way for a user of your Generative AI app to get unintended information from an LLM. For example, let’s say your Generative AI app is supposed to provide information about only specific topics however through prompt injection, a user may be able to get information about other topics. Through proper prompt engineering, one can prevent prompt injection.

Prompt Patterns

Much like software design patterns, there are well established patterns that will help you write effective prompts. One-shot and few-shot prompting is an example of prompt patterns. We will talk about these prompt patterns in subsequent posts, however you can go to https://arxiv.org/abs/2302.11382 to learn more about these patterns.

Resources

Following are some resources I used to learn about prompts and prompt engineering:

Summary

That’s it for this post. I hope you have found the information useful. Please share your thoughts by providing comments.


[This is the latest product I'm working on]