Posted in 2023
Prompting Patterns: The Clarification Pattern
- 02 November 2023
The more I use ChatGPT and build software on LLM APIs, the more I realize that context is essential for LLMs to give high-quality answers. When ChatGPT gives me an unsatisfactory response, it is usually because it lacks information about the problem or my current situation: either I have been ambiguous about the task I want it to address, or it has interpreted the issue in a way I had not anticipated. I have found that adopting a simple pattern significantly reduces these problems and consistently leads to more accurate responses.
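As a rough sketch of the idea behind the pattern (inferred from the title, not spelled out in this excerpt): instruct the model to ask clarifying questions before answering. The prompt wording, model name, and use of the OpenAI Python client below are illustrative assumptions.

```python
# Minimal sketch of a "clarify before answering" prompt.
# System prompt wording and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CLARIFY_SYSTEM_PROMPT = (
    "Before answering, ask me any clarifying questions you need about my "
    "task and my current situation. Only give your final answer once the "
    "relevant context is clear."
)

def ask_with_clarification(user_message: str) -> str:
    """Send a request that asks the model to resolve ambiguity first."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; any chat model works
        messages=[
            {"role": "system", "content": CLARIFY_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(ask_with_clarification("Help me speed up my data pipeline."))
```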
Text Classifiers are an Underrated Application of LLMs
- 12 September 2023
Before LLMs became widespread, getting a text classifier for a non-standard problem up and running from scratch, including annotating a training dataset, would probably take at least three weeks of working hours (3 × 40 hours), or 7,200 minutes. Today, getting a classifier running with an LLM requires only writing a prompt, which takes about a minute.
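To make the "one prompt, one minute" point concrete, a prompt-only classifier can be sketched roughly as below. The label set, model name, and prompt wording are hypothetical, not taken from the post.

```python
# Rough sketch of a prompt-only text classifier using the OpenAI Python client.
# Labels, model name, and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

LABELS = ["bug report", "feature request", "question"]

def classify(text: str) -> str:
    """Classify a piece of text into one of LABELS via a single prompt."""
    prompt = (
        f"Classify the following text into exactly one of these labels: "
        f"{', '.join(LABELS)}.\n\nText: {text}\n\nAnswer with the label only."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the label output deterministic
    )
    return response.choices[0].message.content.strip().lower()

print(classify("The export button crashes the app on Windows."))
```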
A Process for Building LLM Classifiers
- 17 August 2023
Large language models (LLMs) can be prompt-engineered to solve a wide variety of tasks. While chat is often seen as the primary use case, LLMs can also be used to build traditional classifiers.
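One way to read "traditional classifier" is an interface that always returns a single hard label from a fixed set. A hedged sketch of wrapping an LLM call that way follows; the fallback handling and label set are assumptions for illustration, not the process described in the post.

```python
# Sketch of wrapping an LLM call so it behaves like a traditional classifier:
# fixed label set, one hard label out, with a fallback when the model drifts.
# Fallback logic, labels, and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI()

class LLMClassifier:
    def __init__(self, labels: list[str], fallback: str, model: str = "gpt-3.5-turbo"):
        self.labels = labels
        self.fallback = fallback
        self.model = model

    def predict(self, text: str) -> str:
        prompt = (
            f"Labels: {', '.join(self.labels)}\n"
            f"Text: {text}\n"
            "Reply with exactly one label from the list."
        )
        response = client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        answer = response.choices[0].message.content.strip().lower()
        # Map anything outside the label set to the fallback label.
        return answer if answer in self.labels else self.fallback

clf = LLMClassifier(labels=["positive", "negative", "neutral"], fallback="neutral")
print(clf.predict("The onboarding flow was smooth and fast."))
```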