  • h3ron@lemmy.zip to Selfhosted@lemmy.world · k8s storage (CSI) · 7 hours ago

    I have two storage nodes and one is much faster than the other.

    I’m currently evaluating a JuiceFS deployment backed by two MinIO instances (one per node, kept in sync with async bucket replication) behind a load balancer (sidekick) in failover mode. Because JuiceFS also needs a database for its metadata, I went with Valkey + Sentinel.
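
    In case it’s useful, the replication + failover layer looks roughly like this. Hostnames, the bucket name, and credentials are placeholders, and the mc/sidekick invocations are from memory, so check them against your versions:

    ```sh
    # Register both MinIO instances with the mc client (placeholder hosts/creds)
    mc alias set nodeA http://nodeA:9000 ACCESS_KEY SECRET_KEY
    mc alias set nodeB http://nodeB:9000 ACCESS_KEY SECRET_KEY

    # Create the bucket on both sides, then set up async replication A -> B
    # (add the reverse rule too if you want it symmetric)
    mc mb nodeA/juicefs nodeB/juicefs
    mc replicate add nodeA/juicefs \
      --remote-bucket "http://ACCESS_KEY:SECRET_KEY@nodeB:9000/juicefs"

    # sidekick health-checks both endpoints and stops routing to a dead one,
    # which is what provides the failover behavior
    sidekick --health-path "/minio/health/ready" --address ":8000" \
      http://nodeA:9000 http://nodeB:9000
    ```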

    JuiceFS provides a CSI driver that supports ReadWriteMany volumes and CSI snapshots, and it manages both a read and a write cache. Performance is much, much better than Ceph’s. In theory it should be riskier (because of the async replication), but in practice I haven’t lost a bit yet.
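
    The Kubernetes side is just a Secret plus a StorageClass pointing at the JuiceFS CSI driver. A minimal sketch, assuming the driver is already installed; names, namespaces, and endpoints are placeholders, and the Sentinel metaurl format is worth double-checking against the JuiceFS docs:

    ```yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: juicefs-secret
      namespace: kube-system
    type: Opaque
    stringData:
      name: jfs                               # JuiceFS filesystem name
      # Valkey via Sentinel: master name first, then sentinel addresses
      # (format assumed from the JuiceFS docs; verify for your version)
      metaurl: "redis://:PASSWORD@mymaster,valkey-0,valkey-1:26379/1"
      storage: minio
      bucket: "http://sidekick:8000/juicefs"  # S3 endpoint through the sidekick LB
      access-key: ACCESS_KEY
      secret-key: SECRET_KEY
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: juicefs
    provisioner: csi.juicefs.com
    reclaimPolicy: Retain
    parameters:
      csi.storage.k8s.io/provisioner-secret-name: juicefs-secret
      csi.storage.k8s.io/provisioner-secret-namespace: kube-system
      csi.storage.k8s.io/node-publish-secret-name: juicefs-secret
      csi.storage.k8s.io/node-publish-secret-namespace: kube-system
    ```

    PVCs against that class can then request accessModes: [ReadWriteMany] and be mounted from several nodes at once.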

  • From the top:

    • 3 Chinese 2.5Gbit managed switches branded Horaco
    • 3 Chinese N100 “NAS” ITX boards (the cheaper green ones). They are in a Proxmox hyperconverged cluster (HCI)… aka Proxmox + Ceph.
      • Each one has a Pico PSU
      • a PCIe card (mounted on a right-angle PCIe extender) with 2 additional 2.5Gb Realtek NICs
      • 2 NVMe drives (mirrored boot drives)
      • a SATA SSD for Ceph
    • an empty shelf for an ITX board (an AM4 with a bunch of NVMe drives I have yet to move from my previous rack)
    • the last shelf can accommodate:
      • an automotive power distribution block that feeds 12V to the switches and the N100 boards
      • a couple of 12V-to-USB-PD boards that I use to power the USB-C devices on the rack shelves on the back
      • a (missing) TFX PSU that will power the AM4 board
      • a second TFX PSU that feeds 12V into the distribution blocks and powers basically everything else.

    I also have some rack shelves on the back.

    Needless to say, I bought everything before the DRAM craze, and I feel very sad for anyone who has to work with the current market.

    Everything is mounted on custom 2U or 3U 3D-printed 10" rack shelves.