The more I use ChatGPT and develop software using LLM APIs, the more I realize that context is essential for LLMs to provide high-quality answers. When ChatGPT gives me an unsatisfactory answer, it's typically because it lacked information about the problem I'm presenting or my current situation. Often my description of the task is ambiguous, or ChatGPT interprets the issue in a way I hadn't anticipated. However, I've found that a simple pattern significantly reduces these problems and consistently leads to more accurate responses.
The pattern is as follows:
1. Me: I instruct ChatGPT to perform a task. I tell it not to respond immediately but to ask clarifying questions if any aspect of my instruction is unclear.
2. ChatGPT: Asks clarifying questions.
3. Me: I answer the questions and tell it again not to execute the instruction but to ask further clarifying questions if any part of my answers is unclear.
4. ChatGPT: Does one of two things:
   a) Asks additional clarifying questions. If this happens, return to step 3.
   b) Indicates it has no further questions. If this is the case, proceed to step 5.
5. Me: I give the command to execute the instruction.
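The same loop works programmatically. Here is a minimal sketch using the OpenAI Python SDK; the model name, the round cap, and the sentinel phrase "NO FURTHER QUESTIONS" are my own illustrative assumptions, not part of the pattern itself:

```python
# Sketch of the Clarification Pattern as an API conversation loop.
# Assumes the openai package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Do not execute the user's task yet. If anything is unclear, ask "
    "clarifying questions. When nothing is unclear, reply with exactly: "
    "NO FURTHER QUESTIONS."  # assumed sentinel for detecting step 4b
)

def clarify_then_execute(task: str, model: str = "gpt-4o") -> str:
    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": task},  # step 1: the instruction
    ]
    for _ in range(5):  # cap clarification rounds to avoid looping forever
        reply = client.chat.completions.create(model=model, messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        if "NO FURTHER QUESTIONS" in answer:
            break  # step 4b: nothing left to clarify
        # Steps 2-3: show the model's questions, collect my answers, loop.
        print(answer)
        messages.append({"role": "user", "content": input("Your answers: ")})
    # Step 5: give the command to execute the original instruction.
    messages.append({"role": "user", "content": "Now execute the instruction."})
    final = client.chat.completions.create(model=model, messages=messages)
    return final.choices[0].message.content

if __name__ == "__main__":
    print(clarify_then_execute("Write a release-notes summary for v2.1."))
```

The key design point is that the clarifying questions and my answers stay in the `messages` history, so by the time the execute command arrives, the model already holds all the accumulated context.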
I call this the "Clarification Pattern." Recognizing it shifted my perspective from viewing prompt engineering as crafting individual prompts to thinking in terms of human-AI conversations. Through these dialogues, I can build valuable context by clarifying ambiguities in both my understanding and ChatGPT's, giving it the best possible conditions to deliver an excellent response.