Previously, building text classifiers required annotated datasets: the data was split into training and test sets, a model was fine-tuned on the training portion, and its performance was measured on the held-out portion. Improving accuracy often meant analyzing incorrect predictions to hypothesize about what the model had failed to learn about the problem, and then adding more annotations, tweaking the annotation protocol, or adjusting preprocessing steps.
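To make that older workflow concrete, here is a minimal sketch using scikit-learn and a tiny made-up review dataset; both the library choice and the data are assumptions for illustration, as the text above names neither:

```python
# A minimal sketch of the classic annotate/split/train/evaluate loop.
# scikit-learn and the toy dataset are assumptions, not details from the text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "great product, works perfectly", "terrible, broke after a day",
    "absolutely love it", "waste of money", "exceeded my expectations",
    "would not recommend", "fantastic quality", "very disappointing",
]
labels = ["pos", "neg", "pos", "neg", "pos", "neg", "pos", "neg"]

# Split the annotated data into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels
)

# Fit the classifier and measure its performance on held-out data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)
preds = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, preds))

# Error analysis: inspect incorrect predictions to hypothesize about
# what the model failed to learn about the problem.
for text, gold, pred in zip(X_test, y_test, preds):
    if gold != pred:
        print(f"misclassified: {text!r} (gold={gold}, predicted={pred})")
```

The loop closes outside the code: when accuracy stalls, the misclassified examples drive changes to the annotations or preprocessing, and the whole pipeline is retrained.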
However, with the rise of Large Language Models (LLMs), the focus has shifted from constructing datasets to crafting effective prompts. If a model doesn't respond accurately, the fix is to adjust the prompt to head off potential misunderstandings rather than to retrain the model. A significant advantage of LLMs is their ability to explain the reasoning behind their predictions: this interactive approach lets users probe the model's understanding and refine prompts accordingly. Because the models can express their thought processes, asking for that reasoning not only improves performance but also enables a technique that can be termed "Thought Debugging": diagnosing and correcting the model's reasoning directly.
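As an illustration of this prompt-driven loop, here is a sketch using the OpenAI Python client; the client, model name, prompt wording, and output format are all assumptions made for the example, not details from the text:

```python
# A minimal sketch of prompt-based classification that also requests the
# model's reasoning, so a wrong label can be "thought-debugged".
# The OpenAI client and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """Classify the following product review as "pos" or "neg".
First explain your reasoning step by step, then give the label on the
final line in the form: LABEL: <pos|neg>

Review: {text}"""

def classify_with_reasoning(text: str) -> tuple[str, str]:
    """Return (label, reasoning) so the reasoning can be inspected
    whenever the label is wrong."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(text=text)}],
    )
    answer = response.choices[0].message.content or ""
    # Split the free-form reasoning from the final structured label line.
    reasoning, _, label_line = answer.rpartition("LABEL:")
    return label_line.strip(), reasoning.strip()

label, reasoning = classify_with_reasoning("arrived late, but works great")
print(label)
print(reasoning)
```

When the label is wrong, the returned reasoning shows where the model misread the task, and the prompt template, rather than an annotated dataset, becomes the artifact under revision.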