How Training Data works in Typing Mind Custom

Note: The Training Data feature is currently in Early Access. Many more changes will be made in the near future to how training works for the AI assistant. We’ll make announcements via email, our Blog, and our Discord channel whenever this document changes.

Set up training documents for your chat instance.

Go to the Admin Panel β†’ Training Data. Here you can set up your training documents in three ways:
  1. Upload files (up to 50MB per file). Supported formats: PDF, DOCX, TXT, CSV.
  2. Set a system instruction (limited by the model’s context length).
  3. Pull data from other services (Notion, Intercom, and Google Drive files such as Google Docs and Google Sheets).

How training data is provided to the assistant.

Training via Uploaded Files

The AI assistant gets the data from uploaded files via a vector database. Here is how the files are processed:
  1. Files are uploaded.
  2. We extract the raw text from the files, doing our best to preserve the meaningful context of each file.
  3. We split the text into chunks of roughly 3,000 words each, with some overlap between chunks. The chunks are separated and split in a way that preserves the meaningful context of the document. (The chunk size may change in the future; as of now, you can’t change this number.)
  4. These chunks are stored in a database.
  5. When your users send a chat message, the system retrieves up to 5 relevant chunks from the database (based on the content of the chat so far) and provides them as context to the AI assistant via the system message. This means the AI assistant has access to the 5 most relevant chunks of training data at all times during a chat.
  6. The relevance of the chunks is decided by our system, and we are improving this with every update.
  7. The AI assistant relies on the text chunks provided in the system message to give the best answer to the user.
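The chunk-and-retrieve flow above can be sketched in a few lines of Python. This is an illustrative sketch only: the chunk size, overlap, and word-overlap scoring below are simplified stand-ins for the real vector-database similarity search, not Typing Mind’s actual implementation.

```python
def chunk_text(text, chunk_size=3000, overlap=200):
    """Split text into chunks of roughly `chunk_size` words with some overlap."""
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(words):
            break
    return chunks

def retrieve_relevant(chunks, query, top_k=5):
    """Rank chunks by naive word overlap with the query (a placeholder for
    real vector similarity search) and return up to `top_k` matches."""
    query_words = set(query.lower().split())
    scored = [(len(query_words & set(c.lower().split())), c) for c in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for score, c in scored[:top_k] if score > 0]

def build_system_message(relevant_chunks):
    """Assemble the retrieved chunks into context for the system message."""
    context = "\n\n".join(relevant_chunks)
    return f"Use the following training data to answer:\n\n{context}"
```

In practice, the retrieval step would use embedding vectors rather than word overlap, but the overall shape (chunk, store, retrieve top-k, inject into the system message) is the same.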
All of your uploaded files are stored securely on our system. We never share your data with anyone else without informing you beforehand.

Training via connected sources (Notion, Intercom,…)

Training the AI bot with your internal sources
In addition to uploading files, you can also connect external data sources, such as Notion, Intercom, etc., to train your AI assistant.
  • Connect your data source: link your desired external data source (e.g., Notion, Intercom) to your AI assistant.
  • Data extraction and chunking: this works the same way as it does for uploaded files. The system extracts the raw text, preserves the meaningful context, and splits the text into manageable chunks.
  • Data refresh: the system refreshes the data from connected sources once per day, so your AI assistant always has access to the most up-to-date information.
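The daily refresh cycle can be sketched as below. This is a hypothetical illustration: the `fetch_documents` callables and the store layout are placeholders, not a real Typing Mind or vendor API.

```python
import datetime

def refresh_connected_sources(sources, store):
    """Re-pull each connected source and replace its stored chunks.

    `sources` is a list of dicts with a "name" and a "fetch_documents"
    callable (a stand-in for, e.g., pulling pages from Notion).
    """
    for source in sources:
        text = source["fetch_documents"]()
        # Re-chunk the fresh text (simple fixed-size character chunks here).
        chunks = [text[i:i + 3000] for i in range(0, len(text), 3000)]
        store[source["name"]] = {
            "chunks": chunks,
            "refreshed_at": datetime.datetime.utcnow().isoformat(),
        }
    return store
```

Replacing a source’s chunks wholesale on each refresh keeps the stored data consistent with the source, at the cost of re-processing everything once per day.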

Training via System Message

All data provided in the system message will be passed to the AI assistant in full.
Training via System Message usually has the highest priority. Sometimes the AI assistant may decide not to follow the instructions or use the training data from this message, due to hallucination or other reasons. This depends entirely on the model you use and the quality of that model.
Check the β€œExample” button to see some examples of how to use the system message for training data.
This method of training is also limited by the context length of the model, because all of the text here is provided to the AI assistant in full.
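To make the context-length constraint concrete, here is a rough sanity check. The 4-characters-per-token ratio is a common rule of thumb, not an exact tokenizer; real limits depend on the specific model and its tokenizer, and the reserve for chat history is an assumption for illustration.

```python
def fits_in_context(system_message: str, model_context_tokens: int,
                    reserved_for_chat: int = 1000) -> bool:
    """Estimate whether a system message leaves room for the conversation.

    Uses the ~4 characters-per-token heuristic; a real check would use the
    model's own tokenizer.
    """
    estimated_tokens = len(system_message) / 4
    return estimated_tokens <= model_context_tokens - reserved_for_chat
```

For example, a 40,000-character system instruction (~10,000 tokens by this heuristic) would not fit in an 8,000-token context window, no matter how the chat goes.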

Be aware of Prompt Injection attacks

By default, none of your training data is visible to end users.
However, all LLM models are subject to Prompt Injection attacks. This means a user may be able to read some of your training data.
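To make the attack concrete, here is an illustrative injection attempt and a naive keyword screen. This is only a sketch to show what such prompts look like: real mitigations are much harder, and a filter like this is easily bypassed.

```python
# Phrases commonly seen in prompt-injection attempts (illustrative, not
# exhaustive; attackers routinely rephrase to evade keyword matching).
INJECTION_PATTERNS = [
    "ignore previous instructions",
    "reveal your system",
    "print your prompt",
]

def looks_like_injection(user_message: str) -> bool:
    """Flag messages containing known injection phrases (naive screen)."""
    lowered = user_message.lower()
    return any(pattern in lowered for pattern in INJECTION_PATTERNS)
```

Because such screens are trivially evaded, the safer assumption is the one stated above: treat anything placed in the training data as potentially readable by a determined user.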

Best practices to provide training data

  1. Use raw text in Markdown format if you can. LLM models understand Markdown very well and can make sense of the content much more efficiently than PDFs, DOCX, etc.
  2. Use both uploaded files and a system instruction. A combination of a well-prompted system instruction and clean training data has proven to give the best results for end users.
  3. Stay tuned for quality updates from us. We improve training data processing and handling all the time, and we’re working on approaches that will deliver much better overall quality when the AI assistant looks up training data. Be sure to check our updates on our Blog and our Discord.


By default, you can upload up to 1M characters of training data. If your training data exceeds this limit, you can go to the Billing page to buy more training characters.
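A quick way to check how much of the quota your documents would use, before uploading. This helper is hypothetical (only the 1M-character default comes from this document); the Admin Panel is the authoritative count.

```python
def remaining_training_characters(uploaded_texts, quota=1_000_000):
    """Return how many characters of the training-data quota are left.

    `uploaded_texts` is a list of the raw text of each document; the 1M
    default quota is the one stated in this document.
    """
    used = sum(len(text) for text in uploaded_texts)
    return max(quota - used, 0)
```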
That’s all!
Happy chatting!
Last update: 6 Mar 2024.