Just got all the hardware set up and working today, super stoked!

In the pic:

  • Raspberry Pi 5
  • Radxa Penta SATA hat for Pi
  • 5x WD Blue 8TB HDD
  • Noctua 140mm fan
  • 12V -> 5V buck converter
  • 12V (red), 5V (white), and GND (black) distribution blocks

I went with the Raspberry Pi to save some money and keep my power consumption low. I’m planning to use the NAS for streaming TV shows and movies (probably with Jellyfin), replacing my Google Photos account (probably with Immich), and maybe streaming music (not sure what I’d use for that yet). The Pi is running the desktop version of Raspberry Pi OS, though I might switch to the Lite version. I’ve got all 5 drives set up and I’ve tested streaming some stuff locally, including some 4K movies. So far so good!

For those wondering, I added the 5V buck converter because some people online said the SATA hat doesn’t do a great job of supplying power to the Pi if you’re only providing 12V to the barrel jack, so I’m going to run a USB-C cable from the converter to the Pi. I’m also using it to send 5V to the fan’s PWM pin. Might add some LEDs too, fuck it.

Next steps:

  • Set up RAID 5 (or ZFS RAIDz1?)
  • 3D print an enclosure with panel mount connectors

Any tips/suggestions are welcome! Will post again once I get the enclosure set up.

  • Avid Amoeba@lemmy.ca · 2 days ago
    • That power situation looks suspicious. You’d better know what you’re doing so you don’t run into under-voltage events under load.
    • Use ZFS RAIDz1 instead of RAID 5.
    • ramenshaman@lemmy.world (OP) · 1 day ago

      Ultimately I would love to use ZFS, but I read that it’s difficult to expand/upgrade. I’m not familiar with ZFS RAIDz1 though; I’ll look into it. Thanks!

      I build robots for a living; the power is fine, at least for a rough draft. I’ll clean everything up once the enclosure is set up. The 12V supply is 10A, which is just about the limit of what a barrel jack can handle, and the 5V buck is also 10A, which is about double what the official Pi 5 power supply can provide.

      • CmdrShepard49@sh.itjust.works · 1 day ago

        Z1 is just single parity.

        AFAIK expanding a ZFS RAIDz vdev is a new feature. It’s used in Proxmox, but their version hasn’t been updated yet, so I don’t have the ability to try it out. It should be available to you otherwise.

        Sweet build! I have all these parts lying around, so this would be a fun project. Please share your enclosure design if you’d like!

        • Avid Amoeba@lemmy.ca · 1 day ago

          Basically the equivalent of RAID 5 in terms of redundancy.

          You don’t even need to do RAIDz expansion, although that feature could save some space. You can just add another redundant vdev to the existing pool. E.g. you have a 5-disk RAIDz1, which gives you the space of 4 disks. Then maybe slap on a 2-disk mirror, which gives you the space of 1 additional disk. Or another RAIDz1 with however many disks you like. Or a RAIDz2, etc. As long as the newly added vdev has adequate redundancy of its own, it can be seamlessly added to the existing pool, “magically” increasing the available storage space. No fuss.
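
          For example, adding a 2-disk mirror vdev to an existing pool could look roughly like this (pool name and device paths are placeholders, use your own /dev/disk/by-id/ paths):

          # Sketch only: grow the pool zfspool by one mirrored pair
          sudo zpool add zfspool mirror \
            /dev/disk/by-id/ata-DISK6_SERIAL \
            /dev/disk/by-id/ata-DISK7_SERIAL

          # The extra space shows up in the same pool right away
          sudo zpool list zfspool
          sudo zpool status zfspool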

          • heatermcteets@lemmy.world · 1 day ago

            Doesn’t losing a vdev cause the entire pool to be lost? I guess, to your point, as long as each new vdev has sufficient redundancy of its own, 1 drive of redundancy is essentially the same risk whether the vdev has 3 disks or 5. But if a vdev is added without redundancy, that would increase the risk of losing the entire pool.

            • Avid Amoeba@lemmy.ca · 1 day ago

              Yes, it prevents bit rot. It’s why I switched to it from the standard mdraid/LVM/Ext4 setup I used before.

              The instructions seem correct but there’s some room for improvement.

              Instead of using logical device names like this:

              sudo zpool create zfspool raidz1 sda sdb sdc sdd sde -f
              

              You want to use hardware IDs like this:

              sudo zpool create zfspool raidz1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA2FERAP /dev/disk/by-id/wwn-0x5000cca27dc48885 ...
              

              You can discover the mapping of your disks to their logical names like this:

              ls -la /dev/disk/by-id/*
              

              Then you also want to add these options to the command:

              sudo zpool create -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa zfspool ...
              

              These do useful things like setting the optimal block size, enabling compression (basically free performance), and applying a bunch of settings that make ZFS behave like a typical Linux filesystem (its defaults come from Solaris).

              Your final create command should look like:

              sudo zpool create -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa zfspool raidz1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA2FERAP /dev/disk/by-id/wwn-0x5000cca27dc48885 ...
              

              You can experiment till you get your final creation command, since creation/destruction is nearly instant. Don’t hesitate to create and destroy the pool multiple times till you get it right.
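
              Something like this, for example (pool name as above; zpool destroy wipes the pool, so only do it while there’s nothing on it):

              # Check how the pool came out
              sudo zpool status zfspool
              sudo zpool get ashift,autotrim zfspool
              sudo zfs get compression,xattr,relatime zfspool

              # Not happy? Tear it down and re-create it with different options
              sudo zpool destroy zfspool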

      • eneff@discuss.tchncs.de · 1 day ago

        RAIDz expansion is now better than ever before!

        At the beginning of this year (with OpenZFS 2.3.0), they added zero-downtime RAIDz expansion, along with some other things like enhanced deduplication.
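
        If your ZFS is new enough, expanding a RAIDz vdev by one disk is a single attach. A sketch, assuming OpenZFS 2.3+, a pool named zfspool, and a vdev that zpool status lists as raidz1-0 (the new device path is a placeholder):

        # Find the vdev name first
        sudo zpool status zfspool

        # Attach one more disk to the existing raidz1 vdev; expansion runs online
        sudo zpool attach zfspool raidz1-0 /dev/disk/by-id/ata-NEWDISK_SERIAL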

      • fmstrat@lemmy.nowsci.com · 1 day ago

        ZFS is so… So much better. In every single way. Change now before it’s too late, learn and use the features as you go.

      • Creat@discuss.tchncs.de · 1 day ago

        ZFS, specifically RAIDz, can be expanded like any RAID 5/6 these days, assuming support from the distro (it works with TrueNAS, for example). The patches for this were merged years ago now. Expanding any other array (like a striped mirror) is even simpler and is done by adding vdevs.

  • Aceticon@lemmy.dbzer0.com · 1 day ago

    Dust is going to be a problem (well, maybe not that much electrically, but it makes it a PITA to keep clean) after some months, especially for the Raspberry Pi.

    Consider getting (or, even better, 3D printing) an enclosure for it at least (maybe the HDDs will be fine as they are, since the fan keeps the air moving and dust probably can’t actually settle on them).

    • ramenshaman@lemmy.world (OP) · 1 day ago

      I’ve got that covered. I got a filter for the big intake fan. Printing the first batch of enclosure parts now.

  • Allero@lemmy.today · 1 day ago

    I would argue either RAID 5 or ZFS RAIDz1 is inherently unsafe, since recovery takes a lot of read-write operations, and you’d better pray every one of the 4 remaining drives holds up even after one has clearly failed.

    I’ve witnessed many people losing their data this way, even among prominent tech folks (looking at you, LTT).

    RAID6/ZFS RAIDz2 is the way. Yes, you’re gonna lose quite a bit more space (leaving 24TB vs 32TB), but added reliability and peace of mind are priceless.

    (And, in any case, make backups for anything critical! RAID is not a backup!)

  • GreenKnight23@lemmy.world · 1 day ago

    PLA warps over time even at low heat. That said, as long as you have good airflow it shouldn’t be a problem to use it for the housing, but anything directly contacting the drives might warp.

    I thought about doing this myself and was leaning towards reusing drive sleds from existing hardware. It’ll save on design and printing time, as well as alleviate heat problems with the printed parts.

    The sleds are usually pretty cheap on eBay, and you can always buy replacements without much effort.

    • ramenshaman@lemmy.world (OP) · 1 day ago

      Printing the brackets for the drives in PLA now. I designed them to make minimal contact with the drives, so I think they’ll be OK. Even in the rough-draft setup the 140mm fan seems like overkill for keeping them all cool. If the brackets warp I’ll reprint them in something else. Polymaker recently released HT-PLA and HT-PLA-GF, which I’ve been eager to try.

      • ramenshaman@lemmy.world (OP) · 1 day ago

        I’m in the same boat. Based on the things I’ve learned in the last hour or two, ZFS RAIDz1 is just newer and better. Someone told me that ZFS helps prevent bit rot, which is a concern for me, so I’m assuming RAIDz1 also does this, though I haven’t confirmed it yet. I’m designing my enclosure now and haven’t had a chance to look into it.

        • Estebiu@lemmy.dbzer0.com · 1 day ago

          Yup, it does that. You can run a scrub whenever you want and it’ll check everything manually. Or you can just open the files and it will check them at read time.
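
          For example (assuming the pool is named zfspool):

          # Kick off a manual scrub of the whole pool
          sudo zpool scrub zfspool

          # Watch progress and see whether any checksum errors were found/repaired
          sudo zpool status zfspool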

    • r00ty@kbin.life · 1 day ago

      My understanding is that the only issues were the write hole on power loss for RAID 5/6, and rebuild failures due to unseen damage on the surviving drives.

      Issues with single-drive rebuild failures should be largely mitigated by regular drive surface checks and scrubbing, if the filesystem supports it. This should ensure that any single-drive errors that might have been masked by RAID are caught and corrected, and that all drives contain the correct data.
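
      As a rough sketch of what “regular” could look like (pool name, device, and binary paths are just examples; note that the Debian/Ubuntu zfsutils-linux package already ships a monthly scrub cron job, so check /etc/cron.d before adding your own):

      # /etc/cron.d/nas-checks (example only)
      # Scrub the pool on the 1st of each month at 03:00
      0 3 1 * * root /sbin/zpool scrub zfspool
      # Long SMART self-test on each drive mid-month (repeat per drive)
      0 3 15 * * root /usr/sbin/smartctl -t long /dev/sda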

      The write hole itself could be entirely mitigated since the OP is building their own system. What I mean by that is that they could include a “mini UPS” to keep 12V/5V up long enough to shut down gracefully in a power-loss scenario (using a GPIO for a “power good” signal). Back in the day we had RAID controllers with battery backup to hold the cache memory contents and flush them to disk on regaining power, but those became super rare quite some time ago. Also, hardware RAID was always a problem when it came to getting a compatible replacement if the controller itself died.

      Is there another issue with raid 5/6 that I’m not aware of?

      • ramenshaman@lemmy.world (OP) · 1 day ago

        they could include a “mini UPS” to keep 12V/5V up long enough to shut down gracefully in a power-loss scenario

        That’s a fuckin great idea.

        • r00ty@kbin.life · 1 day ago

          I was looking at doing something similar with my Asustor NAS. That is, supply the voltage, battery, and charging circuit myself, add one of those CH347 USB boards to provide I2C/GPIO etc., and have the charging circuit also provide a voltage-good signal that software on the NAS could poll and use to shut down.

          • ramenshaman@lemmy.world (OP) · 1 day ago

            Nice. For the Pi 5 running Pi OS, do you think using a GPIO pin to trigger a sudo shutdown command would be graceful enough to prevent issues?

            • r00ty@kbin.life · 1 day ago

              I think so. I would consider allowing a short time without power before doing that, to handle brief power cuts and brownouts.

              So perhaps poll once per minute, and if there’s no power for more than 5 polls, trigger a shutdown. Make sure you can provide power for at least twice as long as the grace period. You could be a bit more flash and measure the battery voltage, and if it drops below a certain threshold send a more urgent shutdown signal on another GPIO. But really, if the batteries are good for 20+ minutes then it should be quite safe to do it on a timer.

              The logic could be a bit more nuanced, e.g. handling multiple short power cuts in succession by shortening the grace period (since the batteries could be somewhat drained). But this is all icing on the cake, I would say.
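
              A minimal sketch of that polling loop, assuming a “power good” signal wired to GPIO 17 that reads 1 while mains is present, the libgpiod tools installed, and the script running as root. The chip and line numbers are assumptions, check gpioinfo; on some Pi 5 kernels the header pins show up on gpiochip4 instead of gpiochip0:

              #!/bin/bash
              # Untested sketch: poll a power-good GPIO once a minute and shut
              # down after 5 consecutive bad polls (~5 min grace period).
              CHIP=gpiochip0     # may be gpiochip4 on some Pi 5 kernels
              LINE=17            # BCM GPIO wired to the UPS power-good output
              GRACE_POLLS=5
              missed=0

              while true; do
                  if [ "$(gpioget "$CHIP" "$LINE")" = "1" ]; then
                      missed=0                      # mains is back, reset
                  else
                      missed=$((missed + 1))
                      if [ "$missed" -ge "$GRACE_POLLS" ]; then
                          logger "UPS monitor: power lost for ${missed} min, shutting down"
                          shutdown -h now
                          exit 0
                      fi
                  fi
                  sleep 60
              done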

  • justme@lemmy.dbzer0.com · 1 day ago

    I love it! The power part is what always blocks me. I would like to set up a couple of data SSDs, but I never know if you actually need the 3.3V rail etc., so currently I’ve put a Pico PSU on a µATX board in a way too huge tower.

  • nao@sh.itjust.works · 1 day ago

    so I’m going to run a USB-C cable to the Pi

    Isn’t that already the case in the photo? It looks like the converter and all that cabling are only there to get 5V for the fan, but it’s difficult to see where the USB-C comes from.

    • ramenshaman@lemmy.world (OP) · 1 day ago

      Good catch. I don’t have the USB-C cable coming from the buck converter set up yet; I’m waiting on some parts to arrive tomorrow. The USB-C power is currently coming from a separate power supply in this setup. Ultimately, there will be a single 12V barrel jack powering the whole system.

      • coaxil@lemmy.zip · 1 day ago

        Dude seems super responsive to input and requests, he legit might do one if you hit him up. Also, I somehow missed that you are running 5 drives and not 4! My bad.

  • remotelove@lemmy.ca · 1 day ago

    The fan is good, but the orientation seems like it would struggle pushing air between the drives. Maybe a push-pull setup with a second fan?

    • Onomatopoeia@lemmy.cafe · 1 day ago

      I’d at least flip the drive on the right so its underside is closer to the fan, as that side gets hotter in my experience, so it would get more effective cooling.

  • Ek-Hou-Van-Braai@piefed.social · 1 day ago

    Nice I love it!!

    I also have a “messy” setup like this, looking forward to 3D printing a case and then creating a cooling solution for it

    • Onomatopoeia@lemmy.cafe · 1 day ago

      RAID 5 is fine as part of a storage and data management plan. I run it on an older NAS, though it can do RAID 6.

      No RAID is reliable in the sense of “it’ll never fail” - fault tolerance has been added over the years, but it’s still a storage pool built from multiple drives.

      ZFS adds to its fault resistance, but you’d still better have proper backups/redundancy.

        • LifeInMultipleChoice@lemmy.world · 1 day ago

          Then encrypt the drive(s) and auto-run a split command that ensures the data is scattered all over. Your launcher can have a built-in cat command to make sure the files take longer to open, but this way we know that when one drive dies, that data is straight fucked.

          • SkyezOpen@lemmy.world · 1 day ago

            Sitting on a chair with a hammer suspended above your nutsack and having a friend cut the rope at a random time will provide the same effect and surprise with much less effort.