And I mean a true beast that can do anything from gaming to video editing, rendering, and so on
To cover all possible HEDT use cases? I’d probably go with a 4U server chassis with dual EPYC 9575Fs, 1TB of RAM, 6x 8TB U.2 NVMe drives, 6x 20TB SAS HDDs, 3x dual-port 100Gb fiber NICs, and 3x dual-port USB4 cards. Then for GPUs, go with 1x RX 7900 XTX, 1x RTX 5090, and 1x Radeon Pro V710.
Then I’d divide the CPU cores and RAM between three VMs running on Proxmox, with the number of cores and amount of RAM each VM gets based on its specific workload. I’d use PCIe passthrough for the NVMe storage, NICs, USB4 cards, and GPUs, which would be split evenly between the VMs. The SAS storage would get one dynamically allocated virtual disk for each VM.
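As a rough sketch of what one of those three guests could look like in Proxmox (the VM ID, core/RAM split, PCI addresses, and storage pool name here are all placeholders, not a tested build):

```
# /etc/pve/qemu-server/101.conf — hypothetical gaming VM, IDs and addresses are made up
cores: 42                              # roughly a third of the dual 9575Fs' cores
memory: 349525                         # roughly a third of 1TB, in MiB
bios: ovmf
machine: q35
hostpci0: 0000:41:00,pcie=1,x-vga=1    # the GPU, passed through as primary display
hostpci1: 0000:81:00,pcie=1            # one dual-port 100Gb NIC
hostpci2: 0000:82:00,pcie=1            # one dual-port USB4 card
hostpci3: 0000:c1:00,pcie=1            # one of the U.2 NVMe drives
scsi0: sas-pool:vm-101-disk-0,size=4T  # dynamically allocated virtual disk on the SAS array
```

Each device would first need to sit in its own IOMMU group for clean passthrough, which is worth checking on the host before committing to the PCIe slot layout.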
Then I’d have the RX 7900 XTX VM run Fedora. I’d use it for gaming and everyday use.
The V710 VM would run RHEL. I’d use it for development and AI workloads.
The 5090 VM would run Windows 11. I’d use it for video editing, CAD, rendering, and other graphically intensive pro workloads.
The PC itself would be water-cooled, with the radiator on the door of the rack and a rackmount AC unit blowing chilled air through it. The exhaust from the AC and the PC case fans would be ducted outside.
For access to the VMs, I’d use a rackmount USB4/DisplayPort KVM switch and USB4 fiber cables running to three docks on my desk, each connected to monitors, keyboards, and mice appropriate for their respective tasks.
For access to the PC itself, I’d use a rackmount terminal and an additional low-end GPU, probably an Intel Arc card of some sort.
All of that would cost somewhere in the realm of $100,000 USD. Not to mention the fact that I’d have to buy and probably remodel a house to accommodate it.
Liquid cooled with a chiller unit, yes? On the roof or somewhere the noise wouldn’t bother anyone.
You can then use the rads on the door in reverse and blow in cold air.
The entire server room would be a humidity-controlled clean room, so condensation wouldn’t matter. The clean room would also more or less eliminate dust.
A completely redesigned PC based on free and open standards and low latency technologies throughout to maximize real time capabilities.
As it is, the buses are too slow to enable, for instance, a good PC-based oscilloscope. And there is no open bus like there is on, say, the Raspberry Pi.
It would need to abandon x86; maybe a modern Alpha could do it. Alpha was basically a better ISA than x86, and Alphas were way faster, until they were cancelled to promote Itanium instead. RISC-V would be another option. Such a project, however, would probably have little chance of succeeding no matter how good it was: software would be needed for success, and to attract software it needs to be successful.
But it should be a system that invites creativity through openness and capability. A bit like what the Amiga allowed by being open in the ’80s, but not owned by any single company. The Amiga did all sorts of stuff no other computer was capable of, because it was open and better at real-time tasks than anything Intel had.
The price estimate for the project, including designing a new CPU and custom chips, would be about $10–50 billion. And probably closer to the 50!!!
It would be this one.
Read the stuff below. What a super lucky person… it seems like every step of his life lands him in front of a better opportunity that quickly turns into a bigger success. His point about how cheap the computers were compared to the wood costs is interesting, but the difference here is that 10 to 15 years later he may have to replace all those computers with new ones, while the wood is basically forever. I think that’s the worst part: knowing that all that money spent will just be obsolete soon enough. Do you keep doing it? If you’ve got the money, I guess the answer is absolutely!
That’s sick, but seven figures holy shit!
That is AMAZING
Why? It’s still the used Lenovo ThinkCentre Tiny for 100€ plus a RAM & SSD upgrade!
Just built a dual-socket Rome box (only one socket populated right now), with a 7C13 off eBay.
The leap from my dual Skylake is insane; Skylake cores weren’t that impressive at all.
9950X3D and a 5090 with 64GB of 6400 CL28
If money was no issue, I’d have an automatically scaling server contract, and only have basic terminals for my own computer(s).
I was going to say a data center but your idea is better
Money is no object:
Mac Studio, 10G networking, Sunshine streaming
AMD X3D, 4090, 10G networking, Sunshine streaming
Stream whichever is best for your application to your desktop/laptop, etc.