TypingMind lets you access and manage multiple AI models from different providers in one unified workspace.

Documentation Index
Fetch the complete documentation index at: https://docs.typingmind.com/llms.txt
Use this file to discover all available pages before exploring further.
Official Model Support
TypingMind officially supports LLMs from 12 AI providers, including:
- OpenAI: GPT models
- Anthropic Claude: Claude models
- Google Gemini: Gemini models
- OpenRouter: models from multiple providers
- DeepInfra: open-source models such as Llama, Qwen, DeepSeek
- DeepSeek: DeepSeek models
- Groq: fast inference models such as Llama, Qwen
- Mistral: Mistral models
- Moonshot: Kimi models
- Perplexity: Sonar models
- xAI: Grok models
- Z.ai: GLM models
Custom Models
If your model or provider is not on the official support list, you can add it as a custom model in TypingMind. Some examples:
- Novita AI: open-source models such as Llama, Qwen, DeepSeek
- Azure OpenAI: OpenAI models hosted on Azure, such as GPT-5, GPT-4, o-series models
- Azure Foundry: models from providers such as OpenAI, Mistral, Meta, Cohere, DeepSeek, Phi
- Chutes AI: open-source models such as Llama, Qwen, DeepSeek, GLM
- Fireworks AI: open-source models such as Llama, Qwen, DeepSeek, Mixtral
- Minimax: MiniMax models such as MiniMax-M1, MiniMax-Text, MiniMax-VL
- Hugging Face: open-source models such as Llama, Qwen, Mistral, Gemma, DeepSeek
- AWS Bedrock Anthropic: Claude models such as Claude Opus, Claude Sonnet, Claude Haiku
- Ollama: local models such as Llama, Qwen, Mistral, Gemma, DeepSeek
- LM Studio: local models such as Llama, Qwen, Mistral, Gemma, DeepSeek
- LocalAI: local OpenAI-compatible models such as Llama, Mistral, Qwen
- Jina Deep Research: Jina DeepSearch / Deep Research models
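Several of the custom providers above (Ollama, LM Studio, LocalAI, Fireworks AI, and others) expose an OpenAI-compatible chat-completions API, so wiring one up as a custom model usually comes down to pointing an OpenAI-style request at the provider's base URL. A minimal sketch of such a request payload, assuming Ollama's default local endpoint `http://localhost:11434/v1` and a locally pulled `llama3` model (both the endpoint and the model name are assumptions, not values taken from this page):

```python
import json

# Hypothetical custom-model settings for an OpenAI-compatible provider.
# Ollama serves an OpenAI-compatible API under /v1 by default; the model
# name here is whatever you have pulled locally.
custom_model = {
    "endpoint": "http://localhost:11434/v1/chat/completions",
    "model": "llama3",
}

def build_request(prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload for the custom endpoint."""
    return {
        "model": custom_model["model"],
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Hello!")
print(json.dumps(payload, indent=2))
```

The same payload shape works for LM Studio or LocalAI by swapping in their base URLs; only the endpoint and model name change.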