
Scotty's Tech Hangout!

Cool stuff that I want to learn and remember!

Tag: homelab

Setting Up Ollama and Open WebUI with GPU Passthrough on Proxmox

 January 18, 2026

My First Local AI Setup: Running Ollama in My Home Lab

 January 18, 2026

From Server Rack to Workstation: How I Cut My Home Lab Costs by Over $200/Month

 December 30, 2025

Categories

  • AI
  • HomeLab
  • NVME oF
  • ollama
  • Proxmox
  • Scotty March
  • Tail Latency

Tags

  • AI
  • GPU
  • GPU Passthrough
  • homelab
  • infiniband
  • latency
  • nvme
  • ollama
  • Proxmox
  • Written by AI

Awesome Blogs & Creators

  • Allie K. Miller
  • Tech World with Nana
  • Hugging Face Blog
  • Machine Learning Mastery
  • Blocks & Files
  • Network Chuck

Recent Posts

  • Tail Latency: Why Your Fastest System Is Only As Fast As Its Slowest Request
  • My InfiniBand Learning Journey
  • Top 10 Ollama Models: What They’re Good For and What You Need to Run Them
  • Setting Up Ollama and Open WebUI with GPU Passthrough on Proxmox
  • My First Local AI Setup: Running Ollama in My Home Lab

Copyright © 2026 Scotty's Tech Hangout!

Design by ThemesDNA.com