llama-swap Model Switcher Quickstart for OpenAI-Compatible Local LLMs

Hot-swap local LLMs without changing clients.

Before long you are juggling vLLM, llama.cpp, and more, each stack on its own port. Everything downstream still wants a single /v1 base URL; otherwise you keep shuffling ports, profiles, and one-off scripts. llama-swap is the /v1 proxy that sits in front of those stacks: clients keep one endpoint, and the proxy routes each request to the right backend based on the requested model, starting and stopping model servers as needed.
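Here is a minimal sketch of what that looks like from the client side, assuming llama-swap is listening on localhost:8080 and its config defines model entries named qwen-7b and llama-3-8b (the port and model names are assumptions; use whatever your config declares). The base URL never changes; only the model field does.

```python
# Sketch: one OpenAI client, two local models behind the llama-swap proxy.
# Assumes llama-swap listens on localhost:8080 and its config defines
# models named "qwen-7b" and "llama-3-8b" (hypothetical names).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # the proxy, not any single model server
    api_key="sk-no-key-needed",           # local proxy; the key is ignored
)

# llama-swap selects (and if necessary starts) the backend from the model
# field, so switching models is just a different string in the request.
for model in ("qwen-7b", "llama-3-8b"):
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hi in five words."}],
    )
    print(model, "->", reply.choices[0].message.content)
```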

LocalAI QuickStart: Run OpenAI-Compatible LLMs Locally

Self-host OpenAI-compatible APIs with LocalAI in minutes.

LocalAI is a self-hosted, local-first inference server that acts as a drop-in replacement for the OpenAI API, letting you run AI workloads on your own hardware (laptop, workstation, or on-prem server).
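Because the API surface matches OpenAI's, existing clients only need a new base URL. A minimal sketch, assuming LocalAI is running locally on its default port 8080; the model name below is a placeholder, so list what the instance actually serves before hardcoding one:

```python
# Sketch: point the standard OpenAI client at a LocalAI instance.
# Assumes LocalAI is running on localhost:8080 (its default); the model
# name below is hypothetical -- substitute one your instance has installed.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="sk-local",  # ignored unless you have configured API keys
)

# Discover which models the server exposes before picking one.
print([m.id for m in client.models.list().data])

reply = client.chat.completions.create(
    model="llama-3.2-1b-instruct",  # hypothetical; use an id from the list above
    messages=[{"role": "user", "content": "What are you running on?"}],
)
print(reply.choices[0].message.content)
```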