Documentation Index

Fetch the complete documentation index at: https://docs.typingmind.com/llms.txt

Use this file to discover all available pages before exploring further.

TypingMind lets you access and manage multiple AI models from different providers in one unified workspace.

Official Model Support

TypingMind officially supports LLMs from 12 AI providers, including:
  1. OpenAI: GPT models
  2. Anthropic Claude: Claude models
  3. Google Gemini: Gemini models
  4. OpenRouter: models from multiple providers
  5. DeepInfra: open-source models such as Llama, Qwen, DeepSeek
  6. DeepSeek: DeepSeek models
  7. Groq: fast inference models such as Llama, Qwen
  8. Mistral: Mistral models
  9. Moonshot: Kimi models
  10. Perplexity: Sonar models
  11. xAI: Grok models
  12. Z.ai: GLM models

Custom Models

If your model or provider is not in the official support list, you can add it as a custom model on TypingMind. Here are some examples:
  1. Novita AI: open-source models such as Llama, Qwen, DeepSeek
  2. Azure OpenAI: OpenAI models hosted on Azure, such as GPT-5, GPT-4, o-series models
  3. Azure Foundry: models from providers such as OpenAI, Mistral, Meta, Cohere, DeepSeek, Phi
  4. Chutes AI: open-source models such as Llama, Qwen, DeepSeek, GLM
  5. Fireworks AI: open-source models such as Llama, Qwen, DeepSeek, Mixtral
  6. Minimax: MiniMax models such as MiniMax-M1, MiniMax-Text, MiniMax-VL
  7. Hugging Face: open-source models such as Llama, Qwen, Mistral, Gemma, DeepSeek
  8. AWS Bedrock Anthropic: Claude models such as Claude Opus, Claude Sonnet, Claude Haiku
  9. Ollama: local models such as Llama, Qwen, Mistral, Gemma, DeepSeek
  10. LM Studio: local models such as Llama, Qwen, Mistral, Gemma, DeepSeek
  11. LocalAI: local OpenAI-compatible models such as Llama, Mistral, Qwen
  12. Jina Deep Research: Jina DeepSearch / Deep Research models
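
Many of the custom providers above (Ollama, LM Studio, LocalAI, Fireworks AI, and others) expose an OpenAI-compatible chat completions endpoint, which is what makes them straightforward to add as custom models. As a rough sketch, the request body such endpoints expect looks like the following; the endpoint URL and model name are illustrative assumptions (Ollama's default local port is 11434), not TypingMind-specific configuration:

```python
import json

def build_chat_request(model, user_message):
    """Build a minimal OpenAI-compatible chat completions request body.

    Providers that mimic the OpenAI API (Ollama, LM Studio, LocalAI, ...)
    generally accept this same shape.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,
    }

# Assumed example values: a local Ollama install and a Llama model tag.
ENDPOINT = "http://localhost:11434/v1/chat/completions"
payload = build_chat_request("llama3", "Hello!")
print(json.dumps(payload))
```

When adding a custom model in TypingMind, you typically point it at the provider's base URL and supply the model identifier, and TypingMind sends requests in this OpenAI-compatible format.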