Anthropic, a startup founded by former OpenAI engineers, has made a significant breakthrough in chatbot technology. The company has expanded the contextual input window of its Claude chatbot to around 75,000 words, a substantial jump over comparable models. According to Anthropic, this expansion allows the chatbot to process an entire novel in less than a minute.
Anthropic tested the system by editing one sentence in The Great Gatsby and asking the AI to spot the change; Claude found it in just 22 seconds.
One of the major limitations of chatbots is memory: the amount of text these systems can consider at once is quite limited. While the AI language models that underpin them learn from terabytes of text, the context they can attend to during a conversation is still confined to a few thousand words. ChatGPT, for example, can process only around 3,000 words at a time.
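A common workaround for this limit is to drop the oldest messages once a conversation outgrows the window. The sketch below is a simplified illustration of that idea, using a plain word budget rather than any vendor's actual tokenizer or API:

```python
# Minimal sketch: trim chat history to fit a fixed context budget.
# The word-based budget is a stand-in for real token counting.

def trim_history(messages: list[str], max_words: int = 3_000) -> list[str]:
    """Keep the most recent messages whose combined word count stays
    within max_words, dropping the oldest messages first."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):      # walk newest-first
        words = len(msg.split())
        if total + words > max_words:
            break                       # budget exhausted; older msgs dropped
        kept.append(msg)
        total += words
    return list(reversed(kept))         # restore chronological order
```

With a ~3,000-word budget, anything said earlier in a long conversation simply falls out of the model's view, which is exactly the limitation a larger context window relaxes.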
The new Claude features are only available to Anthropic’s business partners, who connect to the chatbot via the company’s API. Pricing for the new features has not been announced, but it is expected to be significantly higher given the increased computational overhead of processing more text.
AI language models measure input not by the number of characters or words but by tokens — semantic units that do not map one-to-one onto characters. Claude’s context window can now handle up to 100,000 tokens, up from 9,000 previously. By comparison, the largest available GPT-4 model handles up to 32,000 tokens.
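The relationship between words and tokens can be sketched with the common rule of thumb that one token covers roughly 0.75 English words. That ratio is an assumption for illustration, not an exact tokenizer, but it shows how a ~75,000-word novel lands near the 100,000-token mark:

```python
# Rough illustration of the words-to-tokens relationship.
# The 0.75 words-per-token ratio is a common heuristic, not an
# exact tokenizer; real token counts vary with the text.

def estimated_tokens(text: str, words_per_token: float = 0.75) -> int:
    """Estimate the token count of a text from its word count."""
    word_count = len(text.split())
    return round(word_count / words_per_token)

def fits_in_context(text: str, context_tokens: int = 100_000) -> bool:
    """Check whether a text would fit in a 100,000-token window."""
    return estimated_tokens(text) <= context_tokens
```

Under this heuristic, 75,000 words work out to about 100,000 tokens, which matches the novel-sized inputs Anthropic describes.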
Anthropic’s breakthrough is significant for the chatbot industry, as it opens up possibilities for longer and more complex conversations with chatbots. The company’s success in expanding the contextual input window could lead to further advances in chatbot technology, though it remains to be seen how this will affect the future pricing and availability of such systems.