Install Ollama

On Linux, a single script installs the server and CLI (macOS and Windows use the desktop installer):

curl -fsSL https://ollama.com/install.sh | sh

Run Models

ollama run downloads the model weights on first use, then opens an interactive chat:

ollama run llama3.1:8b
ollama run deepseek-r1:7b
ollama run qwen2.5:7b

API Access

The local server listens on http://localhost:11434 and exposes a REST API:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Hello!"
}'
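By default /api/generate streams its reply as newline-delimited JSON, one object per line, each carrying a "response" text fragment until a final object with "done": true. A minimal Python sketch of calling the endpoint and reassembling the stream, using only the standard library (the helper names are illustrative, not part of any official client):

```python
import json
import urllib.request

def parse_stream(lines):
    """Join the "response" fragments from Ollama's newline-delimited JSON stream."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        if chunk.get("done"):  # final object signals end of generation
            break
        parts.append(chunk.get("response", ""))
    return "".join(parts)

def generate(prompt, model="llama3.1:8b", host="http://localhost:11434"):
    """POST a prompt to /api/generate and return the full completion text."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps({"model": model, "prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_stream(resp)  # the server sends one JSON object per line
```

Adding "stream": false to the request body returns a single JSON object instead, at the cost of waiting for the whole completion.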

Hardware Requirements

Approximate minimums for quantized models:

7B models:  8 GB RAM, ~6 GB VRAM
13B models: 16 GB RAM, ~10 GB VRAM
70B models: 64 GB RAM, ~40 GB VRAM

A GPU is optional: Ollama falls back to CPU inference, which works but is much slower.
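The figures above roughly follow from parameter count times bytes per weight: default Ollama quantizations store around 4-5 bits per parameter, plus extra memory for the KV cache and runtime buffers. A back-of-the-envelope estimator (the 4.5-bit figure and the 25% overhead factor are illustrative assumptions, not exact):

```python
def approx_model_gb(params_billion, bits_per_weight=4.5, overhead=1.25):
    """Rough memory footprint: quantized weights plus an assumed
    fudge factor for KV cache and runtime buffers."""
    weight_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb * overhead

for size in (7, 13, 70):
    print(f"{size}B: ~{approx_model_gb(size):.1f} GB")
```

The estimate lands in the same ballpark as the VRAM figures above; real usage also grows with context length, since the KV cache scales with the number of tokens kept in memory.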