Install Ollama on localhost

1. Install Ollama
https://github.com/ollama/ollama
curl -fsSL https://ollama.com/install.sh | sh

2. Run the server with a model
ollama run llama3.2
# or: ollama run deepseek-r1
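Once a model is running, the server listens on port 11434. A quick sanity check, sketched in Python with only the standard library (the `/api/tags` endpoint lists locally pulled models; the exact response shape assumed here follows Ollama's documented API):

```python
import json
import urllib.request

def model_names(tags_body: str) -> list:
    """Pull the model names out of an /api/tags response body."""
    return [m["name"] for m in json.loads(tags_body).get("models", [])]

def list_local_models(host: str = "http://localhost:11434") -> list:
    """Ask the running Ollama server which models are pulled locally."""
    with urllib.request.urlopen(f"{host}/api/tags") as resp:
        return model_names(resp.read().decode())

# Usage (needs the server from step 2 running):
#   list_local_models()
```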

3. Query the model with curl
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "coding simple tetris with javascript",
  "stream": false
}'
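The same request from Python, as a minimal stdlib-only sketch. With "stream": false the server returns one JSON object whose "response" field holds the full answer; with streaming on (the default), it sends one JSON object per line, whose "response" fragments concatenate into the answer — the helper below reassembles such a reply:

```python
import json
import urllib.request

def generate(prompt: str, model: str = "llama3.2",
             host: str = "http://localhost:11434") -> str:
    """POST to /api/generate with streaming disabled; return the text."""
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def join_stream(ndjson: str) -> str:
    """Reassemble a streamed reply (one JSON object per line)."""
    return "".join(json.loads(line).get("response", "")
                   for line in ndjson.splitlines() if line.strip())

# Usage (needs the server from step 2 running):
#   generate("coding simple tetris with javascript")
```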

4. Install a GUI for Ollama: Open WebUI
https://github.com/open-webui/open-webui?tab=readme-ov-file#installation-with-default-configuration
# Serves Open WebUI on http://localhost:3000; host.docker.internal lets the
# container reach the Ollama server running on the host
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main