Virtualization

When planning an in-house virtualization environment, the goal was to find a platform that balanced performance, flexibility, and cost without locking key features behind enterprise paywalls. Several platforms were evaluated, including the VMware vSphere hypervisor, Docker's container runtime, and Proxmox VE. Each offered unique advantages, but only one checked all the boxes for a scalable, GPU-ready, multi-node infrastructure.

VMware vSphere came with undeniable pedigree. Its stability, high-availability (HA) features, and enterprise integrations made it a natural first contender. Clustering was solid, and performance monitoring was robust. However, many of its most valuable features were tied to expensive licensing tiers. For non-commercial use or self-hosted lab setups, this became a major limiting factor. It was powerful, but with strings attached.

Docker brought a different appeal through lightweight containerization. It was ideal for spinning up microservices and testing isolated workloads quickly. Portability and spin-up time were unbeatable. But when it came to clustering, fault tolerance, and persistent storage, Docker’s built-in Swarm mode felt too lightweight. Kubernetes offered more power but at the cost of added complexity, and neither solution was well-suited for GPU passthrough or deeper hardware integration out of the box.
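To make that spin-up story concrete, the sketch below uses Docker's official Python SDK (the docker package; the image and command are illustrative choices, not part of the original setup) to launch a disposable container and read its output:

```python
import docker  # pip install docker

# Connect to the local Docker daemon via the default socket.
client = docker.from_env()

# Run a throwaway workload in an isolated container; remove=True
# deletes the container as soon as the command exits.
output = client.containers.run(
    "alpine:3", ["echo", "hello from an isolated workload"], remove=True
)
print(output.decode().strip())
```

Those few lines are exactly what makes Docker so attractive for quick, isolated tests; the friction only appears once clustering, persistent storage, and hardware access enter the picture.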

Proxmox VE emerged as the clear winner after thorough comparison. It offered full KVM virtualization alongside LXC containers, a clean and intuitive web UI, and no reliance on enterprise keys to unlock essential features. With a strong open-source foundation and support for both VMs and containers, Proxmox provided the flexibility of Docker and the performance and reliability of vSphere, all without vendor lock-in.

The choice wasn’t just theoretical—Proxmox was deployed in a multi-node cluster, enabling true high availability and failover. Clustering was seamless, leveraging Corosync for communication and Ceph for distributed storage. The deployment included GPU passthrough, allowing nodes to tap into dedicated NVIDIA cards for machine learning and AI workloads. This gave the environment not only flexibility for standard virtualization, but also the hardware acceleration needed to support modern AI integration.
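As a rough sketch of how such a cluster can be inspected programmatically, the example below talks to the Proxmox REST API through the community proxmoxer Python library; the host, credentials, and names are placeholders rather than details of the actual deployment:

```python
from proxmoxer import ProxmoxAPI  # pip install proxmoxer requests

# Hypothetical endpoint and credentials; substitute real values.
proxmox = ProxmoxAPI(
    "pve1.example.lan", user="root@pam", password="secret", verify_ssl=False
)

# /cluster/status reports quorum state and the Corosync node membership.
for entry in proxmox.cluster.status.get():
    if entry["type"] == "node":
        print(f"node {entry['name']}: online={entry.get('online')}")

# /cluster/resources gives a cluster-wide view of VMs and containers,
# the same inventory HA failover decisions are based on.
for res in proxmox.cluster.resources.get(type="vm"):
    print(f"{res['id']} on {res['node']}: {res['status']}")
```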

From snapshotting and live migration to built-in backup and restore options, Proxmox delivered features often reserved for expensive platforms—without the cost or complexity. Its native support for PCI passthrough made GPU integration surprisingly straightforward, and the ability to assign hardware directly to VMs opened the door for TensorFlow, PyTorch, and other AI tools to run natively in virtualized environments.
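A quick way to confirm that a passed-through card is actually visible to those frameworks is to ask PyTorch from inside the guest; a minimal check, assuming the NVIDIA driver and a CUDA build of PyTorch are installed in the VM:

```python
import torch

# With PCI(e) passthrough, the guest sees the NVIDIA card as native
# hardware, so the standard CUDA checks apply unchanged.
if torch.cuda.is_available():
    print(f"CUDA device: {torch.cuda.get_device_name(0)}")
    x = torch.randn(4096, 4096, device="cuda")
    y = x @ x  # the matrix multiply executes on the passed-through GPU
    print(f"result lives on: {y.device}")
else:
    print("no CUDA device visible; check the hostpci mapping and guest drivers")
```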

Proxmox wasn’t just selected—it was adopted as the core of the infrastructure stack. The platform enabled a unified virtualization environment where critical VMs, dev containers, and AI-powered experiments could coexist and scale. It provided freedom to innovate, test, and deploy without friction.

In the end, Proxmox turned out to be more than just an open-source solution—it was the foundation for a future-forward, hybrid infrastructure, designed for performance, experimentation, and growth.