April 2026 TLDR Setup for Ollama and Gemma 4 26B on a Mac mini

https://news.ycombinator.com/rss Hits: 11
Summary

April 2026 TLDR Setup for Ollama + Gemma 4 26B on a Mac mini (Apple Silicon)

Requirements:
- Mac mini with Apple Silicon (M1/M2/M3/M4/M5)
- At least 24 GB unified memory for Gemma 4 26B
- macOS with Homebrew installed

Install the Ollama macOS app via Homebrew cask (includes auto-updates and the MLX backend):

    brew install --cask ollama-app

This installs:
- Ollama.app in /Applications/
- the ollama CLI at /opt/homebrew/bin/ollama

The Ollama icon will appear in the menu bar. Wait a few seconds for the server to initialize. Verify it's running: …

The model download is ~17 GB. Verify:

    ollama list
    # NAME        ID            SIZE   MODIFIED
    # gemma4:26b  5571076f3d70  17 GB  ...

    ollama run gemma4:26b "Hello, what model are you?"

Check that it's using GPU acceleration:

    ollama ps
    # Should show the CPU/GPU split, e.g. 14%/86% CPU/GPU

Step 5: Configure Auto-Start on Login

5a. Ollama App — Launch at Login

Click the Ollama icon in the menu bar > Launch at Login (enable it). Alternatively, go to System Settings > General > Login Items and add Ollama.

5b. Auto-Preload Gemma 4 on Startup

Create a launch agent that loads the model into memory after Ollama starts and keeps it warm:

    cat << 'EOF' > ~/Library/LaunchAgents/com.ollama.preload-gemma4.plist
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>com.ollama.preload-gemma4</string>
        <key>ProgramArguments</key>
        <array>
            <string>/opt/homebrew/bin/ollama</string>
            <string>run</string>
            <string>gemma4:26b</string>
            <string></string>
        </array>
        <key>RunAtLoad</key>
        <true/>
        <key>StartInterval</key>
        <integer>300</integer>
        <key>StandardOutPath</key>
        <string>/tmp/ollama-preload.log</string>
        <key>StandardErrorPath</key>
        <string>/tmp/ollama-preload.log</string>
    </dict>
    </plist>
    EOF

Load the agent:

    launchctl load ~/Library/LaunchAgents/com.ollama.preload-gemma4.plist

This runs ollama run gemma4:26b with an empty prompt every 5 minutes (StartInterval 300), keeping the model warm in memory.

5c. Keep Models L…
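The preload agent above only helps once the Ollama server itself is up. A minimal sketch for checking that from the shell, assuming Ollama's default port 11434 and its /api/version endpoint (adjust the URL if you have set OLLAMA_HOST):

```shell
#!/bin/sh
# Probe the local Ollama server before relying on the preload agent.
# 11434 is Ollama's default listen port; /api/version is part of its HTTP API.
if curl -sf http://localhost:11434/api/version >/dev/null 2>&1; then
  server_state=up
else
  server_state=down
fi
echo "ollama server: $server_state"
```

If the server reports down, check the menu-bar app, and look at /tmp/ollama-preload.log, where the launch agent writes its output.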
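As an alternative sketch to the StartInterval polling above: Ollama's HTTP API accepts a keep_alive field on /api/generate, and -1 asks the server to keep the model resident indefinitely, so a single request can replace the 5-minute re-run. The model name and default port below are assumptions carried over from this guide:

```shell
#!/bin/sh
# Build a keep-warm request for the Ollama API. keep_alive: -1 means
# "keep this model loaded indefinitely" once the request has been served.
MODEL="gemma4:26b"
payload=$(printf '{"model": "%s", "keep_alive": -1}' "$MODEL")
echo "request body: $payload"
# Uncomment to send against a running server on the default port:
# curl -s http://localhost:11434/api/generate -d "$payload"
```

Note that keep_alive only applies while the server process is running; the Launch at Login setting in step 5a is still what brings the server back after a reboot.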

First seen: 2026-04-03 11:11

Last seen: 2026-04-03 21:16