Posts tagged with "LLMs"
Tunneling Local LLMs to the Cloud with Tailscale
5/24/2025
Hooked up my local LLMs (running via Ollama) to my VPS with Tailscale, so web tools like n8n can query them over HTTPS. It's been great for cost-effective testing and small-scale workloads before reaching for heavyweight commercial APIs.
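For a sense of what the VPS side of this looks like, here's a minimal sketch of a query against Ollama over the tailnet, assuming Ollama's default port (11434) has been exposed with Tailscale's HTTPS proxying; the MagicDNS hostname and model name below are placeholders, not the actual setup:

```python
import requests

# Hypothetical MagicDNS name for the home machine on the tailnet;
# Ollama's HTTP API listens on port 11434 by default.
OLLAMA_URL = "https://my-desktop.tailnet-name.ts.net/api/generate"

resp = requests.post(
    OLLAMA_URL,
    json={
        "model": "llama3",  # example model; substitute whatever is pulled locally
        "prompt": "Summarize Tailscale in one sentence.",
        "stream": False,    # return a single JSON object instead of a stream
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])
```

The same URL drops straight into n8n's HTTP Request node, which is what makes the tunnel so convenient for wiring local models into cloud workflows.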
Built My Own AI Server at Home — Here’s the Setup and Why It Rocks
4/26/2025
Set up a custom AI server on my home network with a Ryzen 9 9950X3D, 64GB of DDR5 RAM, and an RTX 5090. It runs ComfyUI, OpenWebUI, and Ollama, reachable via a static IP and a reverse proxy. Here's a breakdown of the build and the lessons learned along the way.
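As a quick sanity check after a build like this, a sketch that probes each service's status endpoint; the hostname is hypothetical, and the ports and paths are each tool's common defaults rather than this build's actual reverse-proxy routes:

```python
import requests

# Placeholder hostname; ports/paths are the usual defaults for each tool.
SERVICES = {
    "Ollama":    "http://ai-server.local:11434/api/tags",      # lists installed models
    "OpenWebUI": "http://ai-server.local:8080/health",         # basic health check
    "ComfyUI":   "http://ai-server.local:8188/system_stats",   # GPU/VRAM stats
}

for name, url in SERVICES.items():
    try:
        r = requests.get(url, timeout=5)
        print(f"{name}: HTTP {r.status_code}")
    except requests.RequestException as exc:
        print(f"{name}: unreachable ({exc})")
```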