Quick Prompt Engineering Tip 1 - Chain Your Prompts! 🔄
The saying goes, “Writing is rewriting.” The same applies to LLMs! Just as we ask humans to double-check their work, we can prompt LLMs to review and improve their own responses.
Here’s a simple example:
- First prompt: “List 10 words ending in ‘ab’”
- Chain prompt: “Now check if each word is valid. Show your analysis and replace any invalid ones.”
This simple chaining technique can lead to improved results (there’s a code sketch after this list). The key is asking the model to:
- Show its reasoning
- Break down its analysis
- Replace incorrect answers
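Here’s a minimal sketch of what that chain can look like in code, assuming the OpenAI Python SDK (`pip install openai`) and an `OPENAI_API_KEY` in your environment. The model name is just a placeholder, and the same two-call pattern works with any provider’s chat API:

```python
# Minimal prompt-chaining sketch. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder; swap in whichever model you're testing

# First prompt: get the initial answer.
messages = [{"role": "user", "content": "List 10 words ending in 'ab'"}]
first = client.chat.completions.create(model=MODEL, messages=messages)
answer = first.choices[0].message.content
print("Initial answer:\n", answer)

# Chain prompt: feed the answer back into the conversation and ask the
# model to verify its own work and fix anything invalid.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": (
        "Now check if each word is valid. "
        "Show your analysis and replace any invalid ones."
    )},
]
second = client.chat.completions.create(model=MODEL, messages=messages)
print("Revised answer:\n", second.choices[0].message.content)
```

The important detail is that the chain prompt sees the first response in the conversation history, so the model is reviewing its actual output rather than answering from scratch.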
Important caveat: Like any prompt engineering technique, results vary with task complexity and model capability. I tested this with base-tier models from OpenAI, Anthropic, and Google. While the improvement wasn’t dramatic, the models identified errors in their initial responses without introducing new ones during the chaining step.
Always test your prompting strategies! Check out my experiment code here: https://github.com/limyewjin/llm-tutorial-chaining
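One nice property of this particular task is that the chain’s output is cheap to verify programmatically. Here’s a quick sanity check, assuming you’ve parsed the model’s final answer into a list of words (the list below is just an example) and that a system wordlist like `/usr/share/dict/words` is available (common on Linux/macOS; substitute any wordlist you trust):

```python
# Verify each word ends in "ab" and appears in a dictionary wordlist.
# The path and the example words below are assumptions for illustration.
with open("/usr/share/dict/words") as f:
    dictionary = {line.strip().lower() for line in f}

words = ["cab", "lab", "slab", "grab", "drab",
         "crab", "stab", "scab", "blab", "flab"]
for w in words:
    valid = w.lower().endswith("ab") and w.lower() in dictionary
    print(f"{w}: {'OK' if valid else 'INVALID'}")
```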