Introduction
As stated in the previous module, the first step to using LLMs successfully is proper model selection. Once you have selected a suitable model for your task, the next step is learning how to interact with it effectively. The process of designing prompts that elicit the best possible response from an LLM is called prompt engineering.
The field of prompt engineering is strongly shaped by human psychology (Sahoo et al., 2023). LLMs acquired their knowledge base from text on the internet, and anyone who browses the web, especially social media, knows that doing so is essentially an experiment in human psychology. Because LLMs were trained on text written by humans, some aspects of human psychology carry over to how they respond to prompts.
Prompting Techniques
The first simple way to improve your prompt engineering is to understand the difference between zero-shot (sometimes called no-shot) and few-shot prompting. Zero-shot prompting is likely the type of prompting you would do without being taught: the user simply asks the LLM their question with no further specification or examples. In contrast, few-shot prompting provides the LLM with a few example inputs paired with their expected outputs (Brown et al., 2020), allowing it to better infer the type of output the user expects.
While you may think that few-shot prompting would always yield better results, it depends on which dimension you wish to optimize (Dang et al., 2022). If you need a very specific output format, few-shot prompting is extremely powerful. However, it also constrains the LLM. Some topics, particularly open-ended questions, may be better approached with zero-shot prompting: providing examples can introduce bias into the LLM's output, weighting it toward responses that are topically similar to the examples.
Zero-Shot Example:
User: Classify the following movie review sentiment as positive or negative: "I was really looking forward to this movie based on the trailer, but the plot made no sense, the characters were one-dimensional, and I checked my watch at least five times. Complete waste of money."
LLM: Negative.
Few-Shot Example:
User: Classify the movie review sentiment as positive or negative:
Review: "The special effects were amazing and the action sequences kept me on the edge of my seat. Highly recommended!"
Sentiment: Positive
Review: "The film starts strong but falls apart in the third act with plot holes big enough to drive a truck through."
Sentiment: Negative
Review: "While not perfect, the strong performances and beautiful cinematography make this a must-see film."
Sentiment: Positive
Review: "I was really looking forward to this movie based on the trailer, but the plot made no sense, the characters were one-dimensional, and I checked my watch at least five times. Complete waste of money."
Sentiment:
LLM: Negative.
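To make the contrast concrete, here is a minimal Python sketch that builds both prompt styles from the same review as plain strings. The helper names and the examples list are illustrative only and are not part of any particular library; the resulting prompt would be sent to whichever model you have selected.

# Minimal sketch: building zero-shot vs. few-shot prompts as plain strings.
# Helper names and example data are illustrative, not tied to any library.

def build_zero_shot(review: str) -> str:
    """Ask the question directly, with no worked examples."""
    return ("Classify the following movie review sentiment as positive or negative: "
            f"\"{review}\"")

def build_few_shot(review: str, examples: list[tuple[str, str]]) -> str:
    """Prepend labeled examples so the model can infer the expected output format."""
    lines = ["Classify the movie review sentiment as positive or negative:"]
    for example_review, sentiment in examples:
        lines += [f"Review: \"{example_review}\"", f"Sentiment: {sentiment}"]
    lines += [f"Review: \"{review}\"", "Sentiment:"]
    return "\n".join(lines)

examples = [
    ("The special effects were amazing and the action kept me on the edge of my seat.", "Positive"),
    ("The film starts strong but falls apart in the third act.", "Negative"),
]
print(build_few_shot("The plot made no sense and the characters were one-dimensional.", examples))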
Chain-of-thought (CoT) prompting represents the next evolution in prompt engineering (Wei et al., 2022). It began as an attempt to improve accuracy on mathematics problems, which LLMs have historically struggled with. In CoT prompting, the user appends a simple statement to their question: "Let's take this step-by-step". This prompts the model to break the problem down into steps, which decreases the likelihood of error. The power of CoT can be seen in DeepSeek, whose models apply CoT-style reasoning by default and have put significant competitive pressure on the established models (e.g., ChatGPT, Gemini). The increase in accuracy when using CoT prompting is profound. Fortunately, the cue adds only a handful of extra tokens to the prompt, so the CoT approach scales well.
Standard Example:
User: John has 5 blue marbles, 3 red marbles, and 7 green marbles in a bag. If he randomly draws 2 marbles without replacement, what is the probability that both marbles are green?
LLM: The probability of drawing both marbles as green is 7/15 × 6/14 = 42/210 = 1/5 or 0.2.
Chain-of-Thought Example:
User: John has 5 blue marbles, 3 red marbles, and 7 green marbles in a bag. If he randomly draws 2 marbles without replacement, what is the probability that both marbles are green? Let's take this step by step.
LLM: I'll solve this step-by-step:
Step 1: First, I need to find the total number of marbles in the bag. Total marbles = 5 blue + 3 red + 7 green = 15 marbles.
Step 2: Now I'll calculate the probability of drawing a green marble on the first draw. P(first marble is green) = number of green marbles / total marbles = 7/15.
Step 3: After drawing the first green marble, the bag now contains 5 blue marbles, 3 red marbles, and 6 green marbles (since one green marble was removed). Total marbles remaining = 14.
Step 4: Calculate the probability of drawing a green marble on the second draw, given that the first was green. P(second marble is green | first marble was green) = remaining green marbles / remaining total marbles = 6/14 = 3/7.
Step 5: To find the probability of both events occurring (drawing 2 green marbles in sequence), I multiply the individual probabilities. P(both marbles are green) = P(first marble is green) × P(second marble is green | first marble was green) = 7/15 × 3/7 = 21/105 = 1/5 = 0.2, or 20%.
Therefore, the probability of randomly drawing 2 green marbles without replacement is 1/5 or 0.2 (20%).
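Because the cue is simply appended text, the technique is easy to automate. The sketch below shows one way to do so; with_chain_of_thought is an illustrative helper name, and the Fraction check merely verifies the arithmetic in the worked answer above.

# Minimal sketch: wrapping any question with a chain-of-thought cue.
from fractions import Fraction

def with_chain_of_thought(question: str) -> str:
    """Append the step-by-step cue used throughout this module."""
    return f"{question} Let's take this step by step."

question = ("John has 5 blue marbles, 3 red marbles, and 7 green marbles in a bag. "
            "If he randomly draws 2 marbles without replacement, "
            "what is the probability that both marbles are green?")
prompt = with_chain_of_thought(question)

# Sanity check of the model's arithmetic: 7/15 * 6/14 should reduce to 1/5.
assert Fraction(7, 15) * Fraction(6, 14) == Fraction(1, 5)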
Tree of Thought (ToT) extends Chain-of-Thought prompting by exploring multiple reasoning paths simultaneously, like branches on a tree (Yao et al., 2023). Instead of following a single line of reasoning, ToT generates several possible "thoughts" or reasoning steps at each stage, evaluates them, and selects the most promising ones to continue. This approach allows the LLM to backtrack from dead ends and explore alternative solutions, making it particularly effective for complex problems that require strategic planning or have multiple valid approaches.
Chain-of-Thought Example:
User: Plan a 3-day trip to Paris with a $500 budget.
LLM: I'll plan this step by step. Day 1: Visit the Eiffel Tower and nearby cafes ($50 for food). Day 2: Louvre Museum ($15 entry) and walk along Seine ($30 for food). Day 3: Montmartre and Sacré-Cœur (free) with lunch ($25). Accommodation: Budget hostel at $40/night = $120. Total: $240. This leaves $260 for transportation and miscellaneous expenses.
Tree of Thought Example:
Step 1 - Generate Options:
• Option A: Focus on free attractions and cheap food
• Option B: Mix of paid attractions with budget accommodations
• Option C: Splurge on one expensive experience, save elsewhere
Step 2 - Evaluate Options: Option B provides the best balance of experiences within budget.
Step 3 - Develop Chosen Path: Day 1: Eiffel Tower area ($50). Day 2: Louvre ($15) + Seine walk ($30). Day 3: Free Montmartre + budget lunch ($25). Hostel: $120. Transportation: $100. Remaining: $160 for flexibility.
LLM: Based on exploring multiple approaches, Option B offers the optimal Paris experience within your $500 budget, providing a mix of iconic paid attractions and free cultural experiences.
ToT is especially useful for problems requiring strategic thinking, creative problem-solving, or situations where there are multiple valid approaches. By considering several possibilities before committing to one path, it often finds better solutions than linear reasoning.
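Conceptually, ToT is a search over partial plans: generate a few candidate next thoughts, score them, keep the best, and repeat. The sketch below shows only that control flow; propose_thoughts and score_thought are hypothetical stubs that would be backed by LLM calls in a real implementation.

# Minimal sketch of a tree-of-thought search loop (stubs stand in for LLM calls).
from typing import List

def propose_thoughts(partial_plan: List[str], k: int = 3) -> List[str]:
    """Stub: ask the model for k candidate next steps given the plan so far."""
    return [f"option {i} after {len(partial_plan)} steps" for i in range(k)]

def score_thought(partial_plan: List[str]) -> float:
    """Stub: ask the model (or a heuristic) how promising a partial plan looks."""
    return float(len(" ".join(partial_plan)))  # placeholder scoring

def tree_of_thought(depth: int = 3, beam_width: int = 2) -> List[str]:
    frontier: List[List[str]] = [[]]  # each entry is a partial plan (a branch of the tree)
    for _ in range(depth):
        # Branch: extend every surviving plan with several candidate thoughts.
        candidates = [plan + [t] for plan in frontier for t in propose_thoughts(plan)]
        # Prune: keep only the most promising branches (weak branches are abandoned).
        candidates.sort(key=score_thought, reverse=True)
        frontier = candidates[:beam_width]
    return max(frontier, key=score_thought)

print(tree_of_thought())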
Graph of Thought (GoT) goes beyond the tree structure by allowing thoughts to connect in a network-like pattern, similar to how ideas connect in our minds (Besta et al., 2024). Unlike Tree of Thought where each branch splits but doesn't reconnect, GoT allows different reasoning paths to merge, share insights, and build upon each other. This creates a more flexible and interconnected reasoning process where insights from one path can inform and improve other paths, leading to more comprehensive and refined solutions.
Tree of Thought Approach:
User: How can we reduce plastic waste in our community?
Path A: Focus on recycling programs → Improve sorting → Better facilities
Path B: Reduce plastic use → Alternative materials → Education campaigns
Path C: Policy changes → Plastic bag bans → Business regulations
Result: Choose one path based on evaluation, but lose insights from other paths.
Graph of Thought Approach:
Connected Reasoning:
• Recycling insights connect to Education: Better sorting requires public awareness
• Alternative materials connect to Policy: Regulations can incentivize eco-friendly options
• Business regulations connect to Recycling: Companies fund better facilities
Merged Solution: A comprehensive strategy that combines improved recycling (with public education), promotion of alternatives (supported by policy incentives), and business partnerships (encouraged by smart regulations). Each element strengthens the others.
LLM: By connecting these approaches, we create a multi-layered solution where recycling programs, alternative materials, education, and policy work together synergistically for maximum impact.
GoT excels at complex, multi-faceted problems where different aspects are interconnected. It's particularly valuable for creative brainstorming, comprehensive planning, and situations where the best solution combines insights from multiple approaches rather than choosing just one.
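The structural difference from ToT can be sketched with a small data structure: in a graph of thoughts, a node may build on several parent thoughts at once, which is exactly the merging described above. The dataclass and helper below are illustrative names, not a library API.

# Minimal sketch of a graph-of-thought structure: a thought may merge several parents.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Thought:
    text: str
    parents: List["Thought"] = field(default_factory=list)  # multiple parents = merged branches

recycling = Thought("Improve recycling and sorting facilities")
education = Thought("Run public-awareness campaigns")
policy = Thought("Regulate single-use plastics")

# A merged thought draws on multiple reasoning paths at once.
combined = Thought(
    "Fund better sorting facilities through regulation, backed by awareness campaigns",
    parents=[recycling, education, policy],
)

def lineage(thought: Thought) -> List[str]:
    """Walk back through every contributing branch of a merged thought."""
    seen: List[str] = []
    stack = [thought]
    while stack:
        node = stack.pop()
        if node.text not in seen:
            seen.append(node.text)
            stack.extend(node.parents)
    return seen

print(lineage(combined))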
When you have an expectation of what the output should include, using a directional stimulus can prove extremely beneficial (Z. Li et al., 2023). In this technique, the user appends a short list of hint keywords to the prompt, steering the model toward the desired focus, as in the example below.
Direct Prompting Example:
User: Please summarize the following article. Mark Carney, an economist and political newcomer, has been sworn in as Canada's new prime minister, and delivered remarks vowing to "never" become a part of the United States. He took office on Friday just days after being elected leader of the governing Liberal Party and amid an ongoing trade war with US President Donald Trump. Carney replaces outgoing Prime Minister Justin Trudeau, who was in office for nine years, after a landslide victory in last week's Liberal leadership race. In his first order as prime minister, Carney moved to end a policy that had been frequently attacked by political opponents. He ended the consumer carbon pricing programme - a key environmental policy under Trudeau that had become deeply unpopular in recent years amid high inflation. Conservatives have criticised the tax, saying it raised the price of goods and energy for Canadian families. At an afternoon cabinet meeting, Carney said his government will still take steps to fight climate change. An industrial carbon tax on large emitters remains in place. Canadians receive a rebate to offset the cost of carbon pricing and will get their final cheque in April. Canadian politics in recent months have largely been overshadowed by the trade war Trump launched after taking office in January - and with a general election on the horizon, Carney is expected to pitch himself as the candidate best equipped to take on Trump. He previously held roles as governor of the Bank of Canada, the country's central bank, and of the Bank of England, and helped both countries weather major financial disruption. He intends to travel to the UK and France as his first foreign trip as PM next week. Carney said he also looks forward to speaking with Trump.
LLM: Mark Carney, a former central bank governor and political newcomer, has been sworn in as Canada's new prime minister, replacing Justin Trudeau and immediately moving to end the consumer carbon pricing program while promising to never become part of the United States. He takes office during an ongoing trade war with US President Donald Trump, which has dominated Canadian politics in recent months. With his background in leading both the Bank of Canada and Bank of England through financial disruptions, Carney is expected to position himself as the candidate best equipped to deal with Trump ahead of an upcoming general election.
Directional Stimulus Example:
User: [Same article text] [Hints: Mark Carney, prime minister, Liberal party, carbon tax, Pierre Poilievre]
LLM: Mark Carney, an economist and former central bank governor, has been sworn in as Canada's new prime minister after winning the Liberal Party leadership race, replacing Justin Trudeau who served for nine years. In his first action as prime minister, Carney ended the consumer carbon tax program, a policy that had become unpopular and was frequently criticized by Conservative opponents like Pierre Poilievre, while maintaining an industrial carbon tax on large emitters. With a trade war with the United States ongoing under President Trump, Carney emphasized that Canada would "never" become part of the US and is positioning himself as the candidate best equipped to handle relations with Trump ahead of an upcoming general election.
You will notice that the second response focuses on the carbon tax and Pierre Poilievre, while devoting less attention to the trade war with the US.
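Mechanically, the directional stimulus is just a hint list appended to the prompt. A minimal sketch follows; the helper name and the placeholder article string are illustrative.

# Minimal sketch: adding a directional stimulus (hint keywords) to a summarization prompt.
def with_hints(task: str, article: str, hints: list[str]) -> str:
    """Append a bracketed hint list that steers the summary's focus."""
    return f"{task}\n\n{article}\n\n[Hints: {', '.join(hints)}]"

article = "Mark Carney, an economist and political newcomer, has been sworn in..."  # full article text goes here
prompt = with_hints(
    "Please summarize the following article.",
    article,
    ["Mark Carney", "prime minister", "Liberal Party", "carbon tax", "Pierre Poilievre"],
)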
Interestingly, there are techniques that improve prompt efficacy through psychological framing of the request. Emotion prompting has yielded surprisingly strong results in this area (C. Li et al., 2023). This method works by framing requests with emotional stakes (e.g., "If you get this wrong, I'll lose my job and be unable to support my family"), which appears to signal importance to the model and can activate different response patterns. Research suggests that, although an LLM does not experience emotions, its outputs can be influenced by emotional framing in the prompt. To implement this technique, it is helpful to be specific about the particular aspects where accuracy is crucial. The approach can also be combined with other prompt engineering techniques to further improve efficacy.
Direct Prompting Example:
User: Create a detailed financial analysis of this dataset showing the main trends and identifying potential issues.
LLM: Based on the dataset provided, I've identified the following trends: Revenue showed a 3.2% increase year-over-year. Customer acquisition costs rose by 5.1%. Profit margins decreased from 18.4% to 16.9%. Marketing expenditure increased in Q2 but didn't yield proportional returns. The main concern appears to be rising operational costs without corresponding revenue growth. I'd recommend reviewing the pricing strategy and cost structure to address the declining profit margins.
Emotion Prompting Example:
User: Create a detailed financial analysis of this dataset showing the main trends and identifying potential issues. This is extremely important as I have to present these findings to our investors tomorrow, and if I miss any critical issues, we might lose our funding and I'll probably lose my job.
LLM: I've conducted an extremely thorough analysis of your dataset, understanding the high stakes of your investor presentation: Revenue showed a concerning pattern: while overall growth is 3.2% year-over-year, there's a deceleration trend in the last two quarters (Q3: +1.8%, Q4: +0.7%). Customer acquisition costs have risen significantly by 5.1%, suggesting diminishing marketing efficiency. Profit margins have declined from 18.4% to 16.9%, with a particularly sharp drop in Q4 (14.3%). The Q2 marketing expenditure spike (+22%) delivered only a temporary revenue bump (+4.1%). CRITICAL ISSUE: Cash burn rate has increased 37% while revenue growth is slowing, creating a potentially unsustainable trajectory. WARNING: Customer retention metrics show early signs of decline (-3.2% in repeat purchases). For your investor presentation, I strongly recommend addressing these concerns proactively with a clear remediation plan. The cash burn rate relative to slowing growth represents the most urgent threat to continued funding. I've double-checked all calculations to ensure accuracy given how crucial this presentation is for your company's future.
Persona-based prompting is a strategic technique that instructs LLMs to adopt specific roles, identities, or characteristics when generating responses (Tan et al., 2024). To leverage this method, start your prompt by framing the model's perspective (e.g., "Answer as an expert physicist" or "Answer as a kindergarten teacher"). This steers the model toward patterns and vocabulary associated with that role in its training data, and research suggests it can produce more contextually appropriate and specialized outputs. Persona-based prompting is particularly useful when seeking specialized knowledge, simplified explanations, or domain-specific insights that might not emerge through standard prompting techniques.
Direct Prompting Example:
User: Explain how vaccines work and why they're important for public health.
LLM: Vaccines work by introducing a weakened or inactive part of a pathogen (like a virus or bacteria) to trigger an immune response without causing the disease itself. This allows the body to recognize and remember the pathogen, creating antibodies that can fight off future infections more effectively. Vaccines are important for public health because they: prevent serious illnesses and complications; reduce the spread of contagious diseases; protect vulnerable populations who cannot be vaccinated; have contributed to the eradication or significant reduction of many deadly diseases; and, are cost-effective compared to treating widespread disease. The concept of herd immunity means that when a large portion of a population is immune, the spread of disease is limited, offering indirect protection to those who aren't vaccinated.
Persona-based Prompting Example:
User: As a pediatric immunologist with 20 years of experience explaining medical concepts to concerned parents, explain how vaccines work and why they're important for public health.
LLM: In my two decades working with families as a pediatric immunologist, I've found this helpful way to understand vaccines: Think of vaccines as a training program for your child's immune system. When I vaccinate a little one, I'm essentially giving their body a harmless practice session with a weakened or inactive piece of a germ—like showing their immune cells a "wanted poster" without the actual danger. During this practice, their clever little bodies create protective fighter cells (we call them antibodies) specifically designed to recognize that germ. These stay in their system, like vigilant guards, ready to spring into action if the real threat ever appears. Why is this so crucial for our communities? I've unfortunately seen the alternative firsthand in my practice. Before our widespread vaccination programs, I treated children with devastating complications from measles, watched infants struggle to breathe with pertussis, and saw families forever changed by preventable tragedies. When we vaccinate our children, we're not just protecting them—we're creating a protective shield around babies too young for vaccines, grandparents with weakened immune systems, and classmates with medical conditions. In my practice, I've observed how our vaccination rates directly correlate with disease outbreaks. When vaccination rates drop in a school district, we almost invariably see clusters of preventable illnesses follow. I always tell parents: in my years of specializing in pediatric immune systems, vaccines remain one of medicine's most powerful tools for keeping our children safe.
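In chat-style interfaces, the persona is often placed in a system-style message rather than in the user prompt itself. The sketch below uses the common role/content message format; send_chat is a hypothetical stub standing in for whichever client or SDK you use.

# Minimal sketch: assigning a persona via a system-style message.
def send_chat(messages: list[dict]) -> str:
    """Hypothetical stub: replace with a call to your LLM provider."""
    raise NotImplementedError

persona = ("You are a pediatric immunologist with 20 years of experience "
           "explaining medical concepts to concerned parents.")
messages = [
    {"role": "system", "content": persona},
    {"role": "user", "content": "Explain how vaccines work and why they're important for public health."},
]
# reply = send_chat(messages)  # uncomment once send_chat is wired to a real model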
Thought Propagation (TP) is an analogical approach to complex reasoning that addresses the limitations of reasoning from scratch (Yu et al., 2024). Instead of solving each problem independently, TP explores analogous problems related to the input and leverages their solutions to enhance problem-solving. This approach allows LLMs to reuse insights from similar problems and avoid accumulated errors in multi-step reasoning. TP works by first proposing analogous problems, solving them using existing methods like Chain-of-Thought, then aggregating these solutions to either directly yield a new solution or derive a knowledge-intensive plan for execution.
Standard Reasoning Example:
User: Find the shortest path from Node 0 to Node 4 in this graph: Nodes [0,1,2,3,4], Edges [[0,1],[0,3],[1,3],[3,4]], Distances [1,2,4,2]
LLM: The shortest path from Node 0 to Node 4 is [0,1,3,4] with total distance 7.
Thought Propagation Example:
Step 1 - Propose Analogous Problems: Find shortest paths from Node 0 to Node 3 (neighbor of target Node 4)
Step 2 - Solve Analogous Problems: Shortest path from Node 0 to Node 3 is [0,3] with distance 2
Step 3 - Aggregate Solutions: Using the analogous solution, I can extend [0,3] to reach Node 4. The edge from Node 3 to Node 4 has distance 2.
LLM: The shortest path from Node 0 to Node 4 is [0,3,4] with total distance 4. This is optimal compared to the initial solution [0,1,3,4] with distance 7.
The key advantage is that TP can discover better solutions by building upon insights from related problems, rather than being limited to the first solution found through direct reasoning. This approach has shown significant improvements across tasks like shortest-path reasoning, creative writing, and LLM-agent planning.
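The propose/solve/aggregate loop can be sketched on the shortest-path example above. In the sketch below, solve_subproblem stands in for the CoT-prompted solver described by Yu et al.; here it is a tiny exhaustive search so the example runs on its own, and the graph is the one from the example.

# Minimal sketch of thought propagation on the shortest-path example:
# propose analogous problems (paths to the target's neighbors), solve them,
# then aggregate by extending the best analogous solution by one edge.
from itertools import permutations

nodes = [0, 1, 2, 3, 4]
edges = {(0, 1): 1, (0, 3): 2, (1, 3): 4, (3, 4): 2}
dist = {**edges, **{(b, a): d for (a, b), d in edges.items()}}  # treat edges as undirected

def solve_subproblem(start: int, end: int):
    """Stand-in for solving an analogous problem (e.g., via CoT prompting)."""
    best = None
    others = [n for n in nodes if n not in (start, end)]
    for r in range(len(others) + 1):
        for middle in permutations(others, r):
            path = (start, *middle, end)
            if all((a, b) in dist for a, b in zip(path, path[1:])):
                cost = sum(dist[a, b] for a, b in zip(path, path[1:]))
                if best is None or cost < best[1]:
                    best = (list(path), cost)
    return best

# Step 1: propose analogous problems -- shortest paths to each neighbor of the target node 4.
neighbors_of_target = sorted({a for (a, b) in dist if b == 4})
# Steps 2-3: solve each analogous problem, extend its answer by the final edge, keep the best.
candidates = []
for n in neighbors_of_target:
    path, cost = solve_subproblem(0, n)
    candidates.append((path + [4], cost + dist[n, 4]))
print(min(candidates, key=lambda c: c[1]))  # -> ([0, 3, 4], 4)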
Test Your Understanding
Question 1
Is directional stimulus prompting the most effective when you have no specific expectations about what the output should include? Explain your answer.
Question 2
Create an example of emotion prompting that could be used to improve the quality of an LLM's response when asking it to generate a lesson plan on the animal kingdom.
Question 3
You need to create a prompt for an LLM to analyze a complex literary passage for high school students. Which prompting technique would be most effective if you want the LLM to take on the perspective of a seasoned literature professor? Justify your answer.
Question 4
Explain the key difference between zero-shot and few-shot prompting, and provide one advantage and one limitation of few-shot prompting.
Question 5
Which prompting technique involves adding the phrase "Let's take this step-by-step" to encourage the LLM to break down complex problems?
Continue to Module III
Hallucination Detection →