To cover all possible HEDT use cases? I’d probably go with a 4U server chassis with dual EPYC 9575Fs, 1TB of RAM, 6x 8TB U.2 NVMe drives, 6x 20TB SAS HDDs, 3x dual-port 100Gb fiber NICs, and 3x dual-port USB4 cards. Then for GPUs I’d go with 1x RX 7900 XTX, 1x RTX 5090, and 1x Radeon Pro V710.
Then I’d divide the CPU cores and RAM between three VMs running on Proxmox, with the number of cores and amount of RAM each VM gets based on its specific workload. I’d use PCI passthrough for the NVMe storage, NICs, USB4 cards, and GPUs, which would be split evenly among the three VMs. The SAS storage would give each VM one dynamically sized (thin-provisioned) virtual disk.
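For reference, here’s a minimal sketch of what one of those three VMs might look like using Proxmox’s qm CLI. This assumes IOMMU/VFIO passthrough is already set up on the host; the VM ID, PCI addresses, storage name, and the core/RAM split are all placeholders, not a real build.

```
# Hypothetical example: the Fedora daily-driver VM gets 32 cores and 256GB RAM.
qm create 101 --name fedora-daily --machine q35 --bios ovmf \
  --cpu host --cores 32 --memory 262144 \
  --scsihw virtio-scsi-pci --net0 virtio,bridge=vmbr0

# Pass through this VM's share of the hardware (PCI addresses are made up);
# the remaining NVMe drives follow the same pattern.
qm set 101 --hostpci0 0000:41:00,pcie=1,x-vga=1   # RX 7900 XTX
qm set 101 --hostpci1 0000:42:00.0,pcie=1         # one of its two NVMe drives
qm set 101 --hostpci2 0000:43:00,pcie=1           # dual-port 100Gb NIC
qm set 101 --hostpci3 0000:44:00,pcie=1           # dual-port USB4 card

# Thin-provisioned virtual disk on the SAS-backed pool (e.g. LVM-thin),
# so it only consumes space as it fills.
qm set 101 --scsi0 sas-pool:4096
```

The other two VMs would repeat this with their own GPU, drives, NIC, and USB4 card, with cores and RAM sized to their workloads.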
Then I’d have the RX 7900 XTX VM run Fedora. I’d use it for gaming and everyday use.
The V710 VM would run RHEL. I’d use it for development and AI workloads.
The 5090 VM would run Windows 11. I’d use it for video editing, CAD, rendering, and other graphically intensive pro workloads.
The PC itself would be water-cooled, with the radiator on the door of the rack and a rackmount AC unit blowing chilled air through it. The exhaust from the AC and the PC case fans would be ducted outside.
For access to the VMs, I’d use a rackmount USB4/DP KVM switch and USB4 fiber cables running to three docks on my desk, each connected to monitors, keyboards, and mice appropriate to their respective tasks.
For access to the PC itself, I’d use a rackmount terminal and an additional low-end GPU, probably an Intel Arc card of some sort.
All of that would cost somewhere in the realm of $100,000 USD. Not to mention the fact that I’d have to buy and probably remodel a house to accommodate it.
Liquid cooled with a chiller unit, yes? On the roof or somewhere the noise wouldn’t bother anyone.
You can then use the rads on the door in reverse and blow in cold air.
The entire server room would be a humidity-controlled clean room, so condensation wouldn’t matter. The clean room would also more or less eliminate dust.