In any Generative AI application, prompts are the heart and soul. To get the most out of an LLM, every Gen AI developer must learn to write effective prompts.
Problem
But writing prompts is hard! Believe me, it is hard :). In fact, if you want to torture a developer, ask them to write prompts :D.
To write effective prompts, as a prompt writer:
- You must have subject matter expertise.
- You must think very clearly.
- You must be able to communicate your thoughts clearly in your prompts.
And therein lies the problem.
I read a book on UX some time back, and something mentioned there stayed with me:
The problem with developers designing the UX of an application is that if they think they can use it, everybody else can.
This applies equally to us developers when we are building Gen AI applications. When we write prompts, we think as if we ourselves are going to answer the user's questions, and that assumption gets reflected in our prompts.
Recently, I was building an application where I wanted an LLM to rephrase the question asked by the user. As a developer, this is what I came up with:
You are an AI assistant expert in Azure and language comprehension. Please rephrase the following question so that it is clear for you to answer. While rephrasing, please remember the following things:
- It is ok to return the original question if it is already clear.
- Please use chat history given below to better rephrase the question. Chat history contains previous questions asked by the user.
- Do not answer the question while rephrasing.

Begin Chat History
==================
Question: User question
Response: LLM response
End Chat History
==================

User question: "How many storage accounts?"
As a developer, I thought this should be good enough for an LLM. Unfortunately, that was not the case. At times it would work, but more often than not I saw random text output by the LLM.
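For context, here is roughly how that hand-written prompt was assembled in the application. This is a simplified, hypothetical sketch; only the template text comes from the prompt above, and the helper name is my own:

```python
# Simplified sketch of how the hand-written rephrasing prompt was assembled.
# (Hypothetical helper; only the template wording comes from the prompt above.)
def build_rephrase_prompt(chat_history: list[tuple[str, str]], user_question: str) -> str:
    """Interpolate the chat history and the user's question into the prompt."""
    history_lines = "\n".join(
        f"Question: {q}\nResponse: {r}" for q, r in chat_history
    )
    return (
        "You are an AI assistant expert in Azure and language comprehension. "
        "Please rephrase the following question so that it is clear for you to answer. "
        "While rephrasing, please remember the following things:\n"
        "- It is ok to return the original question if it is already clear.\n"
        "- Please use chat history given below to better rephrase the question.\n"
        "- Do not answer the question while rephrasing.\n\n"
        "Begin Chat History\n==================\n"
        f"{history_lines}\n"
        "End Chat History\n==================\n\n"
        f'User question: "{user_question}"'
    )
```

Mechanically the pipeline worked; the problem was the wording of the prompt itself.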
So, how do we solve this?
Solution: Use LLM to create the prompt
Then it dawned on me: why not ask the LLM to create the prompt? It made sense to me for multiple reasons:
- LLMs are subject matter experts, as they are trained on all kinds of data that we (as developers) may not be aware of.
- When properly instructed and controlled, they can not only think clearly but also communicate clearly.
- Most importantly, it is ultimately the LLM that is going to answer the prompt, so it is only logical for it to write the prompt in a language it understands and to include all the details it will need when it comes to executing the prompt.
Steps
Here are the steps I took to get an LLM to come up with a prompt.
Persona Setup
I started by giving my LLM a persona (and stroked their ego a bit :)). This is what I did:
You are the world's best prompt engineer. In fact, no one can write prompts like you do.
Describe the Ask
Then, I clearly described the ask. At first, I provided a very basic instruction like “Write a prompt to rephrase a question” and it came up with something like “Craft a prompt that ingeniously transforms a direct query into an artful, indirect expression”.
So, I put some more thought into it and came up with the following:
You are helping an AI developer with writing prompts. AI developer wants to write a prompt that would rephrase a question asked by the user of the application. User would ask a question that may or may not be clear. What user wants to do is use an LLM to rephrase/reword the question so that it becomes clear for the LLM to answer the rephrased/reworded question. Please be as comprehensive as possible when writing a prompt for this.
and this is what the LLM came up with:
Title: Reframing and Clarifying User Questions for Improved LLM Understanding

Prompt:
An application user has asked a question that may not be clear in its current form. Your task is to rephrase or reword the question in a way that improves its clarity for a Language Model (LLM) to answer it.

Given Question: "{user_question}"

Please rephrase or reword this question to increase its clarity and specificity for a language model. Consider identifying the key elements or concepts within the question, ensuring the language is precise, and avoiding any ambiguity or overly complex language. Your goal is to create a revised question that maintains the original intent, but is more easily understood by an LLM.

Revised Question: ""
Much better, right?
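The whole exchange is just a chat conversation, so it can also be driven from code. A minimal sketch assuming an OpenAI-style chat API — the persona and ask are the ones shown above (the ask abbreviated), while the client and model details are placeholders:

```python
# Sketch of driving the meta-prompting conversation through an
# OpenAI-style chat API. Persona and ask come from the text above;
# client/model details are placeholders.
PERSONA = (
    "You are the world's best prompt engineer. "
    "In fact, no one can write prompts like you do."
)

ASK = (
    "You are helping an AI developer with writing prompts. "
    "AI developer wants to write a prompt that would rephrase a question "
    "asked by the user of the application. "
    "Please be as comprehensive as possible when writing a prompt for this."
)

def build_meta_prompt_messages(persona: str, ask: str) -> list[dict]:
    """The persona goes in the system message, the ask in the first user message."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": ask},
    ]

# With, e.g., the official openai package this could then be sent as:
# client.chat.completions.create(
#     model="gpt-4", messages=build_meta_prompt_messages(PERSONA, ASK)
# )
```

Follow-up refinements (like the ones below) are just additional user messages appended to the same list.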
However, I forgot to mention the chat history, so I refined the ask like this:
Now let's say the question is based on some previous questions. For example, the user may have previously asked "What is the capital of India" to which LLM has responded with "New Delhi". Now the user has asked "How many states are in there". This question in itself is not complete and the previous question will be included in the chat history. Based on this information, can you rewrite the prompt?
The LLM replied with the following:
Title: Contextual Rewording of User Questions for LLM Understanding

Prompt:
An application user has asked a question that may not be clear in its current form and also relies on the context of previous questions and answers. Your task is to rephrase or reword the question, taking into account the previous conversation history, to improve its clarity for a Language Model (LLM) to answer it.

Previous Question and Answer:
Question: "{previous_question}"
Answer: "{previous_answer}"

Current Question: "{user_question}"

Given this context, please rephrase or reword the current question to increase its clarity and specificity for a language model. Consider identifying the key elements or concepts within the question, ensuring the language is precise, and avoiding any ambiguity or overly complex language. Remember to incorporate the context provided by the previous question and answer. Your goal is to create a revised question that maintains the original intent, but is more easily understood by an LLM when considering the previous conversation.

Revised Question: ""
I realized that there can be multiple questions in the chat history, which I had not mentioned in the previous prompt, so I submitted another one:
Now assume there are many previous questions and answers. Please revise the prompt based on this information.
The LLM graciously obliged with the following:
Title: Contextual Rewording of User Questions Based on Conversation History for LLM Understanding

Prompt:
An application user has asked a question that may not be clear in its current form and relies on the context of multiple previous questions and answers. Your task is to rephrase or reword the question, taking into account the conversation history, to improve its clarity for a Language Model (LLM) to answer it.

Conversation History:
"{previous_questions_and_answers}"

Current Question: "{user_question}"

Given this context, please rephrase or reword the current question to increase its clarity and specificity for a language model. Consider identifying the key elements or concepts within the question, ensuring the language is precise, and avoiding any ambiguity or overly complex language. Remember to incorporate the context provided by the previous questions and answers. Your goal is to create a revised question that maintains the original intent, but is more easily understood by an LLM when considering the conversation history.

Revised Question: ""
So, so much better than my original prompt :).
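To plug this LLM-written template into an application, the two placeholders just need to be filled in at runtime. A minimal sketch — the placeholder names come from the template above, but the helper and the abbreviated template constant are my own:

```python
# Hypothetical helper that renders the LLM-written rephrasing prompt.
# The template is abbreviated; the placeholders match the prompt above.
REPHRASE_TEMPLATE = """Your task is to rephrase or reword the question, \
taking into account the conversation history, to improve its clarity for a \
Language Model (LLM) to answer it.

Conversation History:
"{previous_questions_and_answers}"

Current Question: "{user_question}"

Your goal is to create a revised question that maintains the original intent.

Revised Question: ""
"""

def render_rephrase_prompt(history: list[tuple[str, str]], question: str) -> str:
    """Fill the template with a (question, answer) history and the current question."""
    history_text = " ".join(f"Question: {q} Answer: {a}" for q, a in history)
    return REPHRASE_TEMPLATE.format(
        previous_questions_and_answers=history_text,
        user_question=question,
    )
```

For example, `render_rephrase_prompt([("What is the capital of India?", "New Delhi")], "How many states are in there?")` yields a fully-populated prompt ready to send to the LLM.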
Test
I was quite satisfied with the prompt. However, before I could use it in the application, I wanted to verify that it actually worked.
So I used the LLM again: I fed it some questions and asked it to answer them using the prompt it had written.
Here’s what I asked:
Now let's test the prompt above with the following questions. Your task is to take each question and run the prompt and output both the prompt and the result. With each question, include the previous questions and responses in the chat history. Here are the questions:
1. Does xyz storage account has tags?
2. what is its type?
3. Where it is located?
4. What are the services supported by it?
5. Please give me the storage endpoints.
6. Does abc storage account has tags?
7. what is its type?
8. Where it is located?
9. What are the services supported by it?
And this is how it responded:
Sure, I'll provide you with the prompts and the expected results for all 9 questions. I'll make assumptions for LLM responses to complete the exercise.

1. Conversation History: ""
Current Question: "Does xyz storage account has tags?"
Prompt: Does the storage account named 'xyz' have any assigned tags?

2. Conversation History: "Question: Does xyz storage account has tags? Answer: Yes, the 'xyz' storage account has tags."
Current Question: "what is its type?"
Prompt: What is the type of the 'xyz' storage account that has tags?

3. Conversation History: "Question: Does xyz storage account has tags? Answer: Yes, the 'xyz' storage account has tags. Question: what is its type? Answer: The 'xyz' storage account is a 'BlockBlobStorage' type."
Current Question: "Where it is located?"
Prompt: Where is the 'BlockBlobStorage' type 'xyz' storage account, which has tags, located?

4. Conversation History: "Question: Does xyz storage account has tags? Answer: Yes, the 'xyz' storage account has tags. Question: what is its type? Answer: The 'xyz' storage account is a 'BlockBlobStorage' type. Question: Where it is located? Answer: The 'xyz' storage account is located in East US."
Current Question: "What are the services supported by it?"
Prompt: What are the services supported by the 'xyz' storage account, which is a 'BlockBlobStorage' type and located in East US?

5. Conversation History: "Question: Does xyz storage account has tags? Answer: Yes, the 'xyz' storage account has tags. Question: what is its type? Answer: The 'xyz' storage account is a 'BlockBlobStorage' type. Question: Where it is located? Answer: The 'xyz' storage account is located in East US. Question: What are the services supported by it? Answer: The 'xyz' storage account supports Blob, File, Queue, Table, and Data Lake Gen2 services."
Current Question: "Please give me the storage endpoints."
Prompt: Could you provide the storage endpoints for the 'xyz' storage account which is a 'BlockBlobStorage' type, located in East US, and supports Blob, File, Queue, Table, and Data Lake Gen2 services?

6. Conversation History: ""
Current Question: "Does abc storage account has tags?"
Prompt: Does the storage account named 'abc' have any assigned tags?

7. Conversation History: "Question: Does abc storage account has tags? Answer: Yes, the 'abc' storage account has tags."
Current Question: "what is its type?"
Prompt: What is the type of the 'abc' storage account that has tags?

8. Conversation History: "Question: Does abc storage account has tags? Answer: Yes, the 'abc' storage account has tags. Question: what is its type? Answer: The 'abc' storage account is a 'BlockBlobStorage' type."
Current Question: "Where it is located?"
Prompt: Where is the 'BlockBlobStorage' type 'abc' storage account, which has tags, located?

9. Conversation History: "Question: Does abc storage account has tags? Answer: Yes, the 'abc' storage account has tags. Question: what is its type? Answer: The 'abc' storage account is a 'BlockBlobStorage' type. Question: Where it is located? Answer: The 'abc' storage account is located in West US."
Current Question: "What are the services supported by it?"
Prompt: What are the services supported by the 'abc' storage account, which is a 'BlockBlobStorage' type and located in West US?
Not bad, not bad at all!
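The test above follows a simple pattern: after each question, the question/answer pair is appended to the history that feeds the next prompt. A hedged sketch of that loop — `answer_question` stands in for the real LLM call:

```python
# Sketch of the history-accumulation loop from the test above.
# `answer_question` stands in for the real LLM invocation.
def format_history(history: list[tuple[str, str]]) -> str:
    """Flatten (question, answer) pairs into the history string the prompt expects."""
    return " ".join(f"Question: {q} Answer: {a}" for q, a in history)

def run_test(questions: list[str], answer_question) -> list[str]:
    """Render each question's prompt with the history accumulated so far."""
    history: list[tuple[str, str]] = []
    prompts = []
    for q in questions:
        prompts.append(
            f'Conversation History: "{format_history(history)}" Current Question: "{q}"'
        )
        # Feed the (possibly assumed) answer back into the history for the next turn.
        history.append((q, answer_question(q)))
    return prompts
```

Resetting `history` to an empty list between question 5 and question 6 is what lets the second batch ("abc") start with a clean conversation, exactly as in the LLM's test run.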
Summary
That’s it for this post. I hope you found it useful. Please share your thoughts in the comments.
Happy Prompting!