AI 2025-11-03
Ollama: Run LLMs Locally on Your VPS
Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
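To confirm the install, check that the CLI is on your PATH; on Linux the install script also registers a systemd service, so you can verify the server is running (a quick sanity check, assuming a systemd-based distro):

ollama --version

# The server should already be running as a service on Linux installs
systemctl status ollama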

Run Models
ollama run llama3.1:8b
ollama run deepseek-r1:7b
ollama run qwen2.5:7b
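ollama run downloads the model first if needed, then drops into an interactive prompt. On a headless VPS a few other standard CLI commands are useful for managing models:

# Download a model without opening a chat session
ollama pull llama3.1:8b

# List models on disk
ollama list

# Show models currently loaded in memory
ollama ps

# Remove a model to reclaim disk space
ollama rm deepseek-r1:7b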

API Access
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Hello!"
}'
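By default /api/generate streams one JSON object per token. For scripting, the documented stream parameter returns a single JSON response instead, and the companion /api/chat endpoint accepts a message history:

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1:8b",
  "messages": [
    {"role": "user", "content": "Hello!"}
  ],
  "stream": false
}'

Note that Ollama listens on localhost only. To call the API from outside the VPS, set OLLAMA_HOST=0.0.0.0 in the service environment, and put a firewall or reverse proxy in front, since the API has no built-in authentication.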

Hardware Requirements
7B models: 8 GB RAM + 6 GB VRAM
13B models: 16 GB RAM + 10 GB VRAM
70B models: 64 GB RAM + 40 GB VRAM
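Before pulling a large model, check what the VPS actually provides. A minimal sketch with standard Linux tools (nvidia-smi applies only if the box has an NVIDIA GPU; CPU-only servers run models from RAM, just more slowly):

# Available RAM
free -h

# GPU model and VRAM, if present
nvidia-smi

# Free disk space (model files are several GB each)
df -h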

#Sudofree #Ollama