Open WebUI: Self-Hosted LLM Interface
Self-hosted ChatGPT alternative for local LLMs
Open WebUI is a powerful, extensible, and feature-rich self-hosted web interface for interacting with large language models.
Melbourne's essential 2026 tech calendar
Melbourne’s tech community continues to thrive in 2026 with an impressive lineup of conferences, meetups, and workshops spanning software development, cloud computing, AI, cybersecurity, and emerging technologies.
Fast LLM inference with OpenAI API
vLLM is a high-throughput, memory-efficient inference and serving engine for Large Language Models (LLMs) developed by UC Berkeley’s Sky Computing Lab.
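As a taste of what that OpenAI-compatible API looks like, here is a minimal Go sketch that posts a chat-completion request to a locally running vLLM server. It assumes the server was already started (e.g. `vllm serve <model>`, which listens on port 8000 by default) and uses a placeholder model name, so treat it as an illustration rather than a drop-in client:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

// Minimal request shapes for an OpenAI-compatible chat endpoint
// such as the one vLLM exposes at /v1/chat/completions.
type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

func main() {
	reqBody, _ := json.Marshal(chatRequest{
		Model: "my-local-model", // placeholder: use whatever model the server was started with
		Messages: []message{
			{Role: "user", Content: "Summarize what vLLM does in one sentence."},
		},
	})

	resp, err := http.Post("http://localhost:8000/v1/chat/completions",
		"application/json", bytes.NewReader(reqBody))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // raw JSON; a real client would parse choices[0].message.content
}
```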
Master Go code quality with linters and automation
Modern Go development demands rigorous code quality standards. Linters for Go automate the detection of bugs, security vulnerabilities, and style inconsistencies before they reach production.
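For a sense of what these tools catch, here is a deliberately buggy little Go snippet; the program compiles and runs, but `go vet` (and linter suites built on it, such as golangci-lint) flags the mismatched format verb before it ever reaches production:

```go
package main

import "fmt"

func main() {
	count := 3
	// Bug: %s is the wrong verb for an int. Without a linter this only
	// shows up as mangled output ("%!s(int=3)") at runtime.
	fmt.Printf("processed %s items\n", count)
}
```

Running `go vet ./...` or `golangci-lint run` over the package reports the problem immediately.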
Build robust AI/ML pipelines with Go microservices
As AI and ML workloads grow more complex, so does the need for robust orchestration systems. Go’s simplicity, performance, and concurrency make it an ideal choice for building the orchestration layer of ML pipelines, even when the models themselves are written in Python.
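A minimal sketch of that orchestration style: a Go worker pool fans inference jobs out to Python-hosted models. The `runInference` stub below is hypothetical and stands in for whatever HTTP or gRPC call the pipeline actually makes:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// job represents one inference request handed to a Python model service.
type job struct {
	ID    int
	Input string
}

// runInference stands in for a call to a Python-hosted model;
// the real transport (HTTP, gRPC, a queue) is pipeline-specific.
func runInference(j job) string {
	time.Sleep(50 * time.Millisecond) // simulate model latency
	return fmt.Sprintf("result-for-%d", j.ID)
}

func main() {
	jobs := make(chan job)
	results := make(chan string)
	var wg sync.WaitGroup

	// Worker pool: Go handles fan-out and backpressure while the
	// models themselves stay in Python.
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				results <- runInference(j)
			}
		}()
	}

	// Producer: enqueue work, then signal there is no more.
	go func() {
		for i := 0; i < 10; i++ {
			jobs <- job{ID: i, Input: "batch-item"}
		}
		close(jobs)
	}()

	// Close results once every worker has drained the job channel.
	go func() {
		wg.Wait()
		close(results)
	}()

	for r := range results {
		fmt.Println(r)
	}
}
```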
Deploy enterprise AI on budget hardware with open models
The democratization of AI is here. With open-source LLMs like Llama 3, Mixtral, and Qwen now rivaling proprietary models, teams can build powerful AI infrastructure using consumer hardware - slashing costs while maintaining complete control over data privacy and deployment.
Set up robust infrastructure monitoring with Prometheus
Prometheus has become the de facto standard for monitoring cloud-native applications and infrastructure, offering metrics collection, querying, and integration with visualization tools.
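To illustrate the metrics-collection side, here is a small Go example using the official `prometheus/client_golang` library to expose a counter on a `/metrics` endpoint that a Prometheus scrape job could target; the port and metric name are arbitrary choices for the sketch:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// A counter Prometheus will scrape; promauto registers it with the
// default registry, so no explicit MustRegister call is needed.
var requestsTotal = promauto.NewCounter(prometheus.CounterOpts{
	Name: "myapp_requests_total",
	Help: "Total number of handled requests.",
})

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.Inc()
		w.Write([]byte("ok"))
	})

	// Point a scrape_config at :2112/metrics to collect the counter.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":2112", nil))
}
```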
Master Grafana setup for monitoring & visualization
Grafana is the leading open-source platform for monitoring and observability, transforming metrics, logs, and traces into actionable insights through stunning visualizations.
Kubernetes deployments with Helm package management
Helm has revolutionized Kubernetes application deployment by introducing package management concepts familiar from traditional operating systems.
Deploy stateful apps with ordered scaling & persistent data
Kubernetes StatefulSets are the go-to solution for managing stateful applications that require stable identities, persistent storage, and ordered deployment patterns—essential for databases, distributed systems, and caching layers.
Complete security guide - data at rest, in transit, at runtime
When data is a valuable asset, securing it has never been more critical. From the moment information is created to the point it’s discarded, its journey is fraught with risks - whether stored, transferred, or actively used.
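As one concrete illustration of the "at rest" leg, here is a short Go sketch that seals a payload with AES-256-GCM before writing it to disk. The key handling is deliberately simplified for the example; a real deployment would source the key from a KMS or secret store rather than generating it in-process:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"log"
	"os"
)

// encryptAtRest seals plaintext with AES-256-GCM so the bytes that
// land on disk are useless without the key.
func encryptAtRest(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // 32-byte key => AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so it can be recovered at decryption time.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil { // demo only: keep real keys in a KMS
		log.Fatal(err)
	}
	sealed, err := encryptAtRest(key, []byte("customer record"))
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("record.bin", sealed, 0600); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("wrote %d encrypted bytes\n", len(sealed))
}
```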
Deploy production-ready service mesh - Istio vs Linkerd
Discover how to implement and optimize service mesh architectures using Istio and Linkerd. This guide covers deployment strategies, performance comparisons, security configurations, and best practices for production environments.
Installing lightweight K3s Kubernetes on a homelab cluster
Here’s a step-by-step walkthrough of installing a 3-node K3s cluster on bare-metal servers (1 master + 2 workers).
Very short overview of Kubernetes variants
Comparing self-hosted Kubernetes distributions for bare-metal or home servers, focusing on ease of installation, performance, system requirements, and feature sets.
Step-by-step instructions
How-to: installing Kubernetes using Kubespray, including setting up the environment, configuring the inventory, and running the Ansible playbooks.
Some frequent k8s commands with params
Here is my k8s cheat sheet covering Kubernetes’ most important commands and concepts, from installation to running containers and cleaning up.