Ollama is an open-source project that lets you run many LLM models locally on your device. You can easily configure Typing Mind to work with Ollama; the full guide is below.
This guide is for the TypingMind Web version (https://www.typingmind.com). For the macOS version and the Setapp version, requests over the http protocol are blocked due to Apple's security policy. If you want to connect from the macOS app, you can still follow the instructions here, with one additional step: you need to set up HTTPS for Ollama. This can be done using various techniques (e.g., a local HTTPS proxy). For more details on how to run Ollama over HTTPS, please reach out to the Ollama project for support.
Download Ollama
Go to https://ollama.com/ and download Ollama to your device.
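After installing, you can verify from the command line that Ollama works and download a model. A minimal sketch; llama2 is just an example model name, and any model from the Ollama library will do:

```shell
# Pull a model from the Ollama library (llama2 is one example)
ollama pull llama2

# Confirm the model runs locally before connecting Typing Mind
ollama run llama2 "Say hello"

# List the models installed on this machine
ollama list
```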
Set up Ollama environment variables for CORS
By default, Ollama rejects cross-origin browser requests. Set the OLLAMA_ORIGINS environment variable so that Ollama accepts connections from Typing Mind.
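The exact commands depend on your operating system. A sketch, assuming you allow the https://www.typingmind.com origin (you can use * to allow all origins):

```shell
# macOS: set the variable for launchd, then restart the Ollama app
launchctl setenv OLLAMA_ORIGINS "https://www.typingmind.com"

# Linux (or any shell session): start the server with the variable set
OLLAMA_ORIGINS="https://www.typingmind.com" ollama serve

# Windows (PowerShell): persist the variable, then restart Ollama
# setx OLLAMA_ORIGINS "https://www.typingmind.com"
```

After changing the variable, restart Ollama so it picks up the new value.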
Add Ollama models to Typing Mind
Open Typing Mind and click the Model Settings button, then click “Add Custom Model”. Enter the details as shown in the screenshot below:
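Typing Mind talks to custom models through an OpenAI-compatible chat completions endpoint, which Ollama exposes at http://localhost:11434/v1/chat/completions by default. As a sanity check, you can query that endpoint directly before adding it to Typing Mind (llama2 here is just an example; use a model you have pulled):

```shell
# Query Ollama's OpenAI-compatible endpoint directly
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama2",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

If this returns a JSON completion, the same endpoint URL and model name should work in the custom model form.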
Chat with Ollama
Once the model is tested and added successfully, you can select the custom model and chat with it normally.
