Most Software is Not Built to Understand Language

Most of the software we use today is not designed to understand language.

If an application allows you to input language, the purpose is usually for that text to be understood either by you or by other people, not by the software itself.

Examples of text you provide to software that is meant to be understood by you:

  • Naming products in a product catalog
  • Notes
  • Personal tasks

Examples of text you provide to software that is meant to be understood by others:

  • Emails
  • Chat messages
  • Social media posts

And there is a good reason why most of today’s software is not designed to understand language.

Historically, developing software that understands language has been incredibly time-consuming, expensive, and cumbersome, and it has required specialized knowledge. It has therefore also been associated with high risk.

From the moment you identify a language-based task your application could solve to the point where you have a first prototype, an inexperienced team has typically needed one to two months.

This is due to the complicated process involved in developing AI models that can understand language.

The annotation process in particular, in which humans manually review the data and attach a label to each example, is a major bottleneck in the development of AI models, and it is typically monotonous and tedious work.
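To make the bottleneck concrete, here is what the output of that process looks like: a hand-labeled dataset, sketched in Python. The ticket texts and categories are made-up illustrations, not data from any real system.

```python
# The product of the annotation bottleneck: a hand-labeled dataset.
# Texts and categories are invented examples for a support-ticket triager.
labeled_examples = [
    {"text": "Where is my refund?",           "label": "BILLING"},
    {"text": "The frame arrived dented.",     "label": "SHIPPING"},
    {"text": "Brakes squeal after the rain.", "label": "REPAIR"},
    # ...often thousands more rows, each reviewed and labeled by a person.
]
```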

In some cases, it has been possible to bypass this entire process by using openly available models that solve general problems such as Named Entity Recognition or sentiment analysis. Alternatively, applications have been designed so that users annotate the data themselves.
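As a minimal sketch of the first shortcut, here is off-the-shelf sentiment analysis with the Hugging Face transformers library. The library, the default pretrained model it downloads, and the sample reviews are assumptions of the example, not requirements.

```python
# Off-the-shelf sentiment analysis with no annotation or training.
# Assumes `pip install transformers` plus a backend such as PyTorch;
# pipeline() downloads a general pretrained model on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

reviews = [
    "The checkout flow is fast and painless.",
    "Support never answered my ticket.",
]

for review, result in zip(reviews, classifier(reviews)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:>8}  {result['score']:.2f}  {review}")
```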

However, for many tasks, it is not possible to design your application so that users annotate the data. And openly available models solve only very general problems, while the truly valuable language problems are usually specific to the context of your own application, which makes the open models useless there.

In these cases, which I dare say are the majority, there is no way around the slow annotation process.

All of this is why most people avoid even thinking about use cases for their software products where language understanding is part of the solution.

However, the situation has changed dramatically over the past two years.

The introduction of ChatGPT and the growing number of APIs that provide access to the underlying general-purpose Large Language Models (LLMs) have changed this significantly.

Today, we have models so general in their knowledge and interface that they can solve problems they were never specifically trained for, and with impressive accuracy.

This has eliminated most of the work from the previously slow process of developing applications that can understand and meaningfully process language.

It is now possible to reach a prototype in five minutes simply by writing a prompt that instructs a general language model to solve the problem.
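As an illustration, here is a five-minute prototype of a custom classifier built from nothing but a prompt. It assumes the OpenAI Python SDK with an API key in the environment; the model name, the ticket categories, and the helper function are arbitrary choices for the sketch, not a prescribed setup.

```python
# A prototype text classifier built from a prompt alone: no dataset,
# no annotation, no training. Assumes `pip install openai` and an
# OPENAI_API_KEY in the environment; the model name is one example.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You triage support tickets for a bike shop. Classify the ticket as "
    "exactly one of: BILLING, REPAIR, SHIPPING, OTHER. "
    "Reply with the label only.\n\n"
    "Ticket: {ticket}"
)

def classify(ticket: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PROMPT.format(ticket=ticket)}],
    )
    return response.choices[0].message.content.strip()

print(classify("My gears keep slipping after last week's tune-up."))
# Expected output: REPAIR
```

Notice that the prompt plays the role the labeled dataset used to play: changing the categories is a one-line edit rather than a re-annotation project.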

The cost and risk of getting started with language understanding in your software application have therefore almost disappeared.

Very few people I talk to think about the impact of LLMs in this way.

So far, thinking about what problems software can solve has centered on data types that can be meaningfully processed with explicit logic, largely because it has been so expensive to get started with language-understanding software.

However, much of the risk associated with integrating AI into software applications has now disappeared, making it far cheaper to pursue digital solutions that understand and process language.