
Context length limit

As you may know, each chat model has a different context length limit:
  • GPT-3.5: 4,096 tokens
  • GPT-3.5-16k: 16,385 tokens
  • GPT-4: 8,192 tokens
  • GPT-4-32K: 32,768 tokens
  • Claude: 100,000 tokens
πŸ’‘
Learn more about tokens here
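For a rough sense of how text maps to tokens, here is a minimal sketch using OpenAI's tiktoken library. The exact counts in a real chat are slightly higher, since the chat format adds a few tokens per message:
```python
# Minimal sketch: count the tokens in a piece of text with tiktoken.
# Real chat usage adds a few formatting tokens per message on top of this.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
text = "Hello! Can you summarize this article for me?"
tokens = encoding.encode(text)

print(f"{len(tokens)} tokens")
```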

What happens when you reach the context length limit on TypingMind?

When the context length limit is reached in a TypingMind conversation:
  • An alert saying “Context length limit reached” will appear.
  • You can still continue chatting with the AI assistant. However, the system will automatically forget the earliest messages in the conversation to make room for the new content you enter.
πŸ’‘
Please note that the AI model will always preserve the system message that you set up via the initial system instruction or AI Character.
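Conceptually, this trimming behaves like the sketch below. This is an illustrative Python example under stated assumptions, not TypingMind's actual code: the system message is always kept, and the oldest user/assistant messages are dropped until the conversation fits the model's token budget. The count_tokens helper is an assumed callback that returns the token count of a single message.
```python
# Illustrative sketch only -- not TypingMind's actual implementation.
# Drop the oldest non-system messages until the conversation fits
# within the model's context window.

def trim_to_budget(messages, max_tokens, count_tokens):
    """messages: list of {"role": ..., "content": ...} dicts, oldest first.
    count_tokens: assumed helper returning the token count of one message."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def total(msgs):
        return sum(count_tokens(m) for m in msgs)

    # Forget the earliest messages first; the system message is never dropped.
    while rest and total(system + rest) > max_tokens:
        rest.pop(0)

    return system + rest
```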

Make use of the context limit on TypingMind

Setting a context limit lets the AI model remember only a certain number of the most recent messages.
So, if your earlier conversation history isn't crucial, you can limit the AI's memory to only the latest x messages.
πŸ’‘
That said, the AI will still remember and follow the guidance you give it in the system instruction; it only forgets your earlier messages.
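As an illustration (again a sketch under assumed message shapes, not the app's internal code), limiting context to the latest x messages works roughly like this, with the system message always kept:
```python
# Illustrative sketch: keep the system message plus only the
# latest `limit` messages of the conversation.

def apply_context_limit(messages, limit):
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    if limit <= 0:
        return system
    return system + rest[-limit:]

history = [
    {"role": "system", "content": "You are a helpful travel assistant."},
    {"role": "user", "content": "Plan a 3-day trip to Kyoto."},
    {"role": "assistant", "content": "Day 1: ..."},
    {"role": "user", "content": "Add a budget estimate."},
]
print(apply_context_limit(history, limit=2))  # system message + the last 2 messages
```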