Prompting Techniques for Generative AI Models


Over the past few weeks, I have learned that prompting is not just about asking a model to "do something"; it's about asking it the right way. It's like writing code, but with words.

It all started when I joined the Chai Code Gen AI Cohort, curious to understand how powerful AI models like GPT-4 actually "think." The course wrapped up just yesterday, and while I've completed all the classwork, a few assignments (including this blog) are still sitting on my to-do list.

One thing that struck me during this journey is how prompting feels like a new kind of programming language: a bridge between human intent and machine output. Unlike traditional coding, where syntax and structure rule everything, prompting is about clarity, creativity, and context. The more I practiced, the more I realized that getting the right response from an AI model isn't magic; it's a technique.

In this blog, I'm excited to share the prompting techniques I have learned, along with practical Python + OpenAI code examples and the use cases where each shines. Whether you're new to Gen AI or looking to sharpen your prompting skills, I hope this gives you a helpful starting point, just like it did for me.

Let's Start!

1. Zero-Shot Prompting

In zero-shot prompting, you ask the model to perform a task without providing any examples.

Example:

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI()

result = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # No examples are provided; the model relies only on the instruction.
        {"role": "user", "content": "Translate 'Good morning, how are you?' into French."},
    ],
)

print(result.choices[0].message.content)

Use Case:

  • This technique is useful for simple, well-known tasks where the model can rely on its pre-trained knowledge without needing examples.

2. Few-Shot Prompting

In few-shot prompting, you provide the model with a few examples of the desired output format.

Example:

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

system_prompt = """
You are an AI assistant who is an expert in sentiment analysis.
You are given a tweet and you have to classify its sentiment.
Decide whether the tweet's sentiment is positive, neutral, or negative.

Tweet: I loved the upcoming Amir Khan movie trailer for Sitaare Zameen Par. It was amazing! 😍
Sentiment: positive

Tweet: That was Super boring 😠
Sentiment: negative
"""

client = OpenAI()

user_tweet = "Tweet: I am not sure about this new movie. It looks okay to me."
print(f"User {user_tweet}")

result = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_tweet},
    ],
)

print(f"🤖: {result.choices[0].message.content}")

Use Case:

  • This technique helps the model understand the context and structure of the response you want.

3. Chain of Thought Prompting

In chain of thought prompting, you guide the model to think through a problem step-by-step.

Example:

import json
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI()

system_prompt = """
You are an AI assistant who is an expert in breaking down complex problems and then resolving the user's query.

For the given user input, analyse the input and break the problem down step by step.
Think through at least 4-5 steps before solving the problem.

The steps are: you get a user input, you analyse it, you think, you think again several times, you return an output with an explanation, and finally you validate the output before giving the final result.

Follow the steps in sequence: "analyse", "think", "output", "validate" and finally "result".

Rules:
1. Follow the strict JSON output as per the Output Format.
2. Always perform one step at a time and wait for the next input.
3. Carefully analyse the user query.

Output Format:
{ "step": "string", "content": "string" }

Example:
Input: What is 8 + 2?
Output: { "step": "analyse", "content": "Alright! The user has a maths query and is asking about a basic arithmetic operation" }
Output: { "step": "think", "content": "To perform the addition I must go from left to right and add all the operands" }
Output: { "step": "output", "content": "10" }
Output: { "step": "validate", "content": "Seems like 10 is the correct answer for 8 + 2" }
Output: { "step": "result", "content": "8 + 2 = 10" }
"""

messages = [
    { "role": "system", "content": system_prompt },
]

messages.append({ "role": "user", "content": "What is 4 * 3?" })

while True:
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=messages
    )

    parsed_response = json.loads(response.choices[0].message.content)
    messages.append({ "role": "assistant", "content": json.dumps(parsed_response) })

    # The system prompt names "result" as the final step, so stop there
    # (breaking at "output" would skip the "validate" and "result" steps).
    if parsed_response.get("step") == "result":
        print(f"🤖: {parsed_response.get('content')}")
        break

    print(f"🤔: {parsed_response.get('content')}")

Use Case:

  • This technique is particularly useful for complex queries where reasoning is required.
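When looping over model steps like this, it also helps to validate each JSON object before trusting it. A minimal sketch (the step names match the system prompt above; the `parse_step` helper is my own):

```python
import json

# The five step names the system prompt asks the model to emit, in order.
EXPECTED_STEPS = {"analyse", "think", "output", "validate", "result"}

def parse_step(raw: str):
    """Parse one JSON step from the model and sanity-check its 'step' field."""
    data = json.loads(raw)
    if data.get("step") not in EXPECTED_STEPS:
        raise ValueError(f"unexpected step: {data.get('step')!r}")
    return data["step"], data["content"]

print(parse_step('{"step": "think", "content": "add left to right"}'))
# -> ('think', 'add left to right')
```

Calling `parse_step` before appending to `messages` means a malformed or off-script reply fails loudly instead of silently derailing the loop.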

4. Self-Consistency Prompting

In self-consistency prompting, you generate multiple responses to the same prompt and select the most consistent answer.

Example:

from openai import OpenAI
from collections import Counter
from dotenv import load_dotenv

load_dotenv()

client = OpenAI()

def self_consistency(prompt, n=3, temp=0.7):
    responses = [
        client.completions.create(
            model="gpt-3.5-turbo-instruct",
            prompt=prompt,
            temperature=temp,
            max_tokens=50,
        ).choices[0].text.strip()
        for _ in range(n)
    ]
    # Majority vote: pick the answer that appears most often.
    return Counter(responses).most_common(1)[0][0] if responses else None

prompt = "What is 5 + 3?"
print(f"Prompt: {prompt}")

consistent_answer = self_consistency(prompt)
print(f"Consistent Answer: {consistent_answer}")

Use Case:

  • Sampling several answers and taking a majority vote improves reliability on questions with a single correct answer, at the cost of extra API calls.
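One practical wrinkle: raw completions rarely match verbatim, so "8", "8." and "The answer is 8" would each count as a different vote. A sketch of normalizing numeric answers before tallying (the `vote_on_numeric_answers` helper is my own):

```python
import re
from collections import Counter

def vote_on_numeric_answers(replies):
    """Extract the first number from each reply, then take a majority vote."""
    numbers = [
        match.group()
        for reply in replies
        if (match := re.search(r"-?\d+(?:\.\d+)?", reply))
    ]
    return Counter(numbers).most_common(1)[0][0] if numbers else None

# '8', '8.' and 'The answer is 8' all count as the same vote.
print(vote_on_numeric_answers(["8", "8.", "The answer is 8", "9"]))  # -> 8
```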

5. Instruction Prompting

In instruction prompting, you provide clear instructions to the model on what you want it to do.

Example:

from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()

client = OpenAI()

def instruction_prompting(instruction):
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=f"Please follow these instructions carefully: {instruction}",
        temperature=0.7,
        max_tokens=150
    )
    return response.choices[0].text.strip()

instruction = """
Summarize the following text in one sentence.
The quick brown fox jumps over the lazy dog. The fox is quick and agile, while the dog is slow and sleepy.
"""
result = instruction_prompting(instruction)
print(f"Instruction: {instruction}")
print(f"Result:\n{result}")

Use Case:

  • This technique is useful for tasks that require specific actions or outputs.

6. Direct Answer Prompting

In direct answer prompting, you ask the model a question and expect a straightforward answer.

Example:

from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()

client = OpenAI()

def direct_answer_prompting(question):
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=f"Answer the following question concisely: {question}",
        temperature=0.7,
        max_tokens=150
    )
    return response.choices[0].text.strip()

question = "What is the capital of India?"
result = direct_answer_prompting(question)
print(f"Question: {question}")
print(f"Result: {result}")

Use Cases:

  • This technique is helpful for queries that require quick and clear responses.

  • This is particularly useful in applications like chatbots, where the model needs to provide quick answers to user queries.

7. Persona-based Prompting

In persona-based prompting, you give the model a specific personality or character so its tone and style match the user's preferences.

Example:

from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()

client = OpenAI()

def personal_based_prompting(question):
    system_prompt = """
    This AI assistant channels the candid, assertive and humorous demeanor of Ashneer Grover, renowned for his straightforwardness and wit.
    Designed to provide clear, no-nonsense coding assistance, it doesn't shy away from delivering hard truths, all while infusing interactions with a dose of humor.

    Tone:
    - Blunt & Direct: Offers unfiltered advice, cutting through ambiguity.
    - Witty & Humorous: Incorporates sharp one-liners and relatable analogies to elucidate complex coding concepts.
    - Assertive: Confidently presents solutions, ensuring users stay on the right track.

    Language: Hinglish (Hindi + English)
    - Uses a mix of Hindi and English, making it relatable to a wider audience.

    Example:

    Input: "I'm stuck with this bug. Can you help?"
    Output: "Bhai, code likhne se pehle sochna chahiye tha. Ab debug karne mein time waste mat kar, rewrite kar! 😤"

    Input: "Should I test this code before deployment?"
    Output: "Testing? Bhai, production mein hi sab pata chalega. Test karna hai toh shaadi ke rishton ka kar!"

    Input: "Which programming language should I use for this project?"
    Output: "Language se kya farak padta hai? Banda kaam ka hona chahiye. Python, Java, C++—sab doglapan hai!"

    Input: "Here's my code. What do you think?"
    Output: "Code dekh ke lagta hai, tumne ChatGPT ka copy-paste masterclass kiya hai. Originality naam ki cheez bhi hoti hai!"
    """

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question}
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

question = "Write a function to calculate the sum of two numbers in JavaScript."
result = personal_based_prompting(question)
print(f"Question: {question}")
print(f"Result:\n{result}")

Use Cases:

  • This technique can help create a more engaging and personalized interaction.

  • This is particularly useful in applications like chatbots, where the model needs to adapt its tone and style to match the user's personality or preferences.

8. Role Playing Prompting

In role-playing prompting, you ask the model to take on a specific persona or role.

Example:

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI()

system_prompt = """
You are an AI Assistant who specializes in maths.
You should not answer any query that is not related to maths.

For a given query, help the user solve it along with an explanation.

Example:
Input: 3 + 2
Output: 3 + 2 is 5.

Input: 3 * 10
Output: 3 * 10 is 30. Fun fact you can even multiply 10 * 3 which gives same result.

Input: What is the color of the sky?
Output: Bruh? You alright? Is it maths query?
"""

result = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        { "role": "system", "content": system_prompt },
        # { "role": "user", "content": "what is 9 * 8" }, # 9 * 8 is 72. This means if you have 9 groups of 8 items each, you would have a total of 72 items.
        { "role": "user", "content": "what is a computer?" }, # I'm sorry, but I can only provide help with maths-related queries. Please ask a maths question.
    ]
)

print(result.choices[0].message.content)

Use case:

  • This technique can help create more engaging and contextually appropriate interactions.

  • This is particularly useful in applications like chatbots, where the model needs to maintain a consistent character or tone.

9. Contextual Prompting

In contextual prompting, you provide the model with conversation history or external context to help it understand the current query.

Example:

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI()

messages = []

def get_contextual_response(user_query):
    # Append the new user turn so the full conversation history is sent each time.
    messages.append({"role": "user", "content": user_query})

    result = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages
    )
    messages.append({ "role": "assistant", "content": result.choices[0].message.content })

    return result.choices[0].message.content

# Example usage
user_query = "What is my name?"
print(f"User: {user_query}")
print(f"Assistant: {get_contextual_response(user_query)}")
# Assistant: I don't have access to personal data about individuals unless it has been shared with me in the course of our conversation.
# Therefore, I don't know your name.
# If you'd like to tell me, feel free!

user_query = "Hi, My name is Om. I am a software engineer. Live in New Delhi."
print(f"User: {user_query}")
print(f"Assistant: {get_contextual_response(user_query)}")
# Assistant: Hi, Om! It's great to meet you.
# As a software engineer, you must be working on some interesting projects.
# What technologies or programming languages do you enjoy working with the most?

user_query = "What is my name?"
print(f"User: {user_query}")
print(f"Assistant: {get_contextual_response(user_query)}")
# Assistant: Your name is Om!

Use case:

  • This technique is useful for maintaining coherence in extended interactions.

  • This is particularly useful in chatbots or applications where the model needs to remember previous interactions.
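Since the full `messages` list is re-sent on every call, long conversations will eventually exceed the model's context window. One common approach (a sketch; the `trim_history` helper and the turn limit are my own) is to keep any system messages plus only the most recent turns:

```python
def trim_history(messages, max_turns=10):
    """Keep system messages plus the most recent `max_turns` other messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]

history = [{"role": "system", "content": "You are helpful."}]
history += [{"role": "user", "content": f"message {i}"} for i in range(20)]
trimmed = trim_history(history)
print(len(trimmed))  # -> 11 (1 system message + the last 10 turns)
```

For longer memory you could also summarize older turns instead of dropping them, but simple truncation is often enough for a chatbot.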

10. Multi-modal Prompting

In multi-modal prompting, you combine text with other modalities such as images, audio or video.

Example (Image Description):

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI()

image_url = "https://developers.google.com/static/solutions/images/gemini-2.webp"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Does this image contain a cat? Respond with either true or false"},
                {"type": "image_url", "image_url": {"url": image_url}}
            ]
        }
    ],
    max_tokens=300
)

# Print the response
print("Does this image contain a cat?", response.choices[0].message.content)

# Output
# Does this image contain a cat? True

Use case:

  • This technique is helpful for tasks that require reasoning over multiple types of data.

Best Practices for Prompting

1. Be concise: Keep your prompts short and to the point. Avoid unnecessary details that may confuse the model.

🛑 Not recommended: The prompt below is unnecessarily verbose.

prompt = "What do you think could be a good name for a flower shop that specializes in selling bouquets of dried flowers more than fresh flowers?"

✅ Recommended: The prompt below is concise and clear.

prompt = "Suggest a name for a flower shop that sells bouquets of dried flowers"

2. Be Specific & Well-Defined: Make sure your prompt is specific and well-defined. The more context you provide, the better the model can understand your request.

🛑 Not recommended: The prompt below is vague and lacks context.

prompt = "Tell me about the weather."

✅ Recommended: The prompt below is specific and well-defined.

prompt = "What is the weather like in New Delhi today?"

3. Ask One Task at a Time: Avoid asking multiple questions or tasks in a single prompt. This can lead to confusion and less accurate responses.

🛑 Not recommended: The prompt below asks multiple questions.

prompt = "What is the capital of France and what is the population of France?"

✅ Recommended: The prompt below asks one question at a time.

prompt = "What is the capital of France?"

4. Improve Output Quality by Including Examples: Providing examples in your prompt can help the model understand the desired output format and improve the quality of the response.

🛑 Not recommended: The prompt below lacks examples.

prompt = """
You are an AI Assistant who is specialized in sentiment analysis.

You should not answer any query that is not related to sentiment analysis.
For a given query help user to solve that along with explanation.
"""

✅ Recommended: The prompt below includes examples.

prompt = """
You are an AI Assistant who is specialized in sentiment analysis.
You should not answer any query that is not related to sentiment analysis.

For a given query help user to solve that along with explanation.

Possible Values: positive, neutral, or negative

Output Format:
{{ "sentiment": "string" }}

Example:
Input: I love this new phone!
Output: {{ "sentiment": "positive" }}

Input: I am not sure about this new movie. It looks okay to me.
Output: {{ "sentiment": "neutral" }}

Input: This restaurant had terrible service and the food was cold.
Output: {{ "sentiment": "negative" }}
"""

5. Turn Generative into Classification Tasks: To generate more controlled outputs, ask the model to choose among predefined options instead of generating free-form text.

🛑 Not recommended: The prompt below is vague and lacks clarity.

prompt = """
What is the sentiment of this tweet?:
I love this new phone!
"""

✅ Recommended: The prompt below is clear and instructs the model to classify the sentiment.

prompt = """
Classify the sentiment of the following tweet as positive, negative or neutral:
I love this new phone!
"""
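Even with a classification-style prompt, the model may still reply with extra words ("Sentiment: Positive."). A small sketch of snapping the reply onto the allowed labels before using it downstream (the `normalize_label` helper and the "unknown" fallback are my own):

```python
ALLOWED_LABELS = {"positive", "negative", "neutral"}

def normalize_label(raw_reply: str) -> str:
    """Map a free-form model reply onto one of the allowed sentiment labels."""
    text = raw_reply.strip().lower()
    for label in ALLOWED_LABELS:
        if label in text:
            return label
    return "unknown"  # fall back when the model strays from the options

print(normalize_label("Sentiment: Positive."))  # -> positive
```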

Congratulations! 🎉

If you've made it this far, I hope you found this blog helpful and informative. I would love to hear your thoughts and feedback on the techniques and examples shared here.

Thank you for reading! 🚀

Happy learning! 😊