I can’t overstate how much I hate GitHub Actions. I don’t even remember hating any other piece of technology I’ve used. Sure, I still make fun of the PHP I remember from the days of PHP4, but even then I didn’t hate it; I merely found it subpar compared to other technologies emerging at the time (like Ruby on Rails or Django). And yet I hate GitHub Actions.
With Passion.

Road to Hell
The day before writing these words I was implementing build.rs for my tmplr project. To save you a click: it is a file/project scaffolding tool with human-readable (and craftable) template files. I personally use it very often, given how easy it is to craft new templates, by hand or with the aid of the tool, so check it out if you need something similar.

  • frongt@lemmy.zip · 1 day ago

    the whole loop still took around 2-3 minutes to execute.

    FOR. A. SINGLE. CHANGE.

    Yes. For a single change. Like having an editor with 2 minute save lag,

    Damn you’re running a whole production pipeline and it only takes two minutes? That’s pretty good. I’ve worked with projects that take tens of minutes, if not hours, just to compile.

    Now if I was running some dinky little solo dev project, I’d probably just use some system-local CI thing for rapid iteration, if my changes needed to go through CI at all. Maybe Jenkins if I was feeling fancy. But a big project with a bunch of users on a remote platform? Getting a result in just 2-3 minutes is awesome.

    • setsubyou@lemmy.world · 1 day ago

      Damn you’re running a whole production pipeline and it only takes two minutes? That’s pretty good. I’ve worked with projects that take tens of minutes, if not hours, just to compile.

      At work we have CI runs that take almost a week. On fairly powerful systems too. Multiple decades of a “no change without a test case” policy in a large project combined with instrumented debug builds…

      Tbf we don’t run those on every single change though. The per-change ones only take a couple of hours.

    • Jeena@piefed.jeena.net (OP) · 1 day ago

      We do it in several ‘stages’. First we have a check pipeline that just compiles a single component and runs the unit tests; that takes perhaps 5 minutes.

      Then we do an incremental AOSP build with the change on top. That takes about 40 minutes.

      Then we run the incremental build together with all the other changes for the day and do a manual smoke test that the most important stuff works, and only when it does do we merge all those changes from the previous day. That takes about two to three hours.

      Then there is the nightly test where we build the latest main branch and do static code analysis. That takes forever, like 4 hours or so.

      Then there are release builds from scratch, which also run all the Google compliance tests for AOSP, and those run practically for more than a day.

      It’s an interesting test of your personal patience :D. But I don’t think it’s possible to do this with GitHub Actions; we use Zuul for it, like BMW and Volvo: https://www.youtube.com/watch?v=Z8rofKRen3w

  • 9point6@lemmy.world · 1 day ago

    FOR. A. SINGLE. CHANGE. Yes. For a single change. Like having an editor with 2 minute save lag, pushing commit using program running on cassette tapes

    Am I reading this bit correctly? Are they complaining about testing a CI change and it only taking a couple of minutes to verify?

    And this person’s using a compiled language?

  • yaroto98@lemmy.world · 1 day ago (edited)

    Huh, I was expecting more. There’s so much to hate with GitHub Actions!

    • Sometimes you can pass a list or a boolean, but for composite actions you can only pass strings.
    • Open bugs that GitHub Actions just doesn’t care to fix (I’ve run across about 3). Most recently, the concurrency flag cancel-in-progress doesn’t work, and they aren’t fixing it.
    • Variables often aren’t accessible until the next step.
    • The API is slow to update: running jobs that query themselves won’t see themselves as running 50% of the time.
    • Inability to easily pass vars from one job to another (output in step, output from job, needs, call); it’s 4 lines of code to get a single var accessible in another job (a sketch follows below).
    • The UI doesn’t show startup errors. Depending on the error, if you make a dumb syntax error in the workflow file the UI will just say it failed to start up. It won’t tell you what happened, won’t even link it to your PR which kicked it off; you have to go hunting for it.
    • Workflow dispatch is a joke. Can’t run it in a branch, no dynamic inputs, no banners.
    • Can’t run schedules in branches.
    • Inconsistent event data labels and locations across triggers. Want to get the head SHA? It’s in a different place for each trigger; same for so many other things.
    • Merge queues have the worst event data. They run off an autogenerated branch, so they fill everything in with actor=mergequeuebot and garbage that is unhelpful. Good luck trying to get the head SHA and look up the real info, like say the branch name you’re merging in. You have to parse it out of a head_ref’s description or some junk.
    • No dynamic run names. Well, you can have them, but you have to call the API and update the name yourself. It’s a hassle. Why not just let me toss an @actor or @branch into the run name? That way, when a dev is looking for their instance of “Build Job” in a massive list, they can actually find theirs.
    • Garbage documentation.

    I could go on. I do CI/CD for work and GHA is the tool they are having us use. I have no say in the matter.
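
    To make the job-to-job point concrete, here is a minimal sketch (job names, step id and the value are all made up) of the plumbing it takes to read one value in another job:

      # hypothetical two-job workflow: getting a single value from "build" into "deploy"
      jobs:
        build:
          runs-on: ubuntu-latest
          outputs:
            version: ${{ steps.get_version.outputs.version }}  # 2. re-export the step output as a job output
          steps:
            - id: get_version
              run: echo "version=1.2.3" >> "$GITHUB_OUTPUT"    # 1. write a step output
        deploy:
          needs: build                                         # 3. declare the dependency on the other job
          runs-on: ubuntu-latest
          steps:
            - run: echo "deploying ${{ needs.build.outputs.version }}"  # 4. finally read it

    Four separate pieces of plumbing to move one string.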

    • jjjalljs@ttrpg.network · 1 day ago

      There’s no good way, to my knowledge, to run or test GitHub Actions locally. So if I want to verify that my change uploads the coverage report at the end of the pipeline, I have to run the whole thing. And then I find an error because on the GitHub runner blah blah is different.

      • namingthingsiseasy@programming.dev · 21 hours ago

        The best way I found to do this is by commenting out the portions of the build that take the longest.

        Which is stupid, but that’s what you get with Microsoft products.

        (I get that there may be ways to test this locally, but I found this method to be the easiest.)

      • yaroto98@lemmy.world · 1 day ago

        You can install the GitHub Actions runner locally and use it; however, all that does is eat your CPU cycles and prevent them from charging you. It doesn’t help you debug that black box at all.

      • yaroto98@lemmy.world · 1 day ago

        Not saying it’s perfect, but at every job I’ve been at they’re migrating away from Jenkins. And they never have a reason to do so other than shiny new toy. Jenkins has its own problems, but I personally think it’s literally decades ahead of GitHub Actions.

        I do like runners better than Jenkins’ default of running bare-metal on the server; however, the runners are too much of a black box. I wish there was a debug toggle on runners: pause at a step, then provide a console into the runner. Some runs literally take hours, so adding some debug output and rerunning makes troubleshooting tedious.

        • namingthingsiseasy@programming.dev · 21 hours ago

          I’ve found the edit/test/debug loop in Jenkins to be much faster than Github Actions. It was quite a refreshing change when I made that transition.

          • yaroto98@lemmy.world · 17 hours ago

            Yep, I think the only thing GitHub Actions has over Jenkins is built-in versioning. I wish I could edit a pipeline in Jenkins and easily roll it back. Or, even better, have tags, so if I break something the team can just use the previous tag while I figure it out.

            • namingthingsiseasy@programming.dev · 11 hours ago

              Interesting. Were you using a Jenkinsfile? I’m not sure I completely understand your use case, but using a Jenkinsfile would mean that your entire pipeline is defined in a file in source control, so you could roll it back if you made a change that didn’t work quite right. Seems to be what you’re looking for, if I’m understanding your use case correctly.

              https://www.jenkins.io/doc/book/pipeline/jenkinsfile/

        • dublet@lemmy.world · 1 day ago

          And they never have a reason to do so other than shiny new toy.

          Security. Jenkins has issues with every other plugin being a backdoor or every other version having some vulnerability.

          • yaroto98@lemmy.world · 1 day ago

            And the Actions in the marketplace aren’t?

            My employers have only allowed a very small subset of each. It’s super frustrating having to reinvent the wheel constantly.

            • tal@lemmy.today · 1 day ago

              I wonder if problems could be mostly avoided by running potentially-unsafe code in a container without network access.
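
              Something along these lines, maybe: a hypothetical step that runs the untrusted part in a container with networking disabled (the image name is made up; docker takes the same flag):

                # hypothetical CI step: no network access for code you don't fully trust
                steps:
                  - run: podman run --rm --network none untrusted-tool:latest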

          • dublet@lemmy.world · 1 day ago

            Never found act useful. Where I work, we have our own self-hosted instance, including self-hosted runners, and it doesn’t really improve the situation WRT debugging an Action.

      • Jeena@piefed.jeena.net (OP) · 1 day ago

        I like Zuul quite a lot. It’s a bit complicated to set up at first, but once it’s running it’s really cool; the gating mechanism especially can’t be found anywhere else, and the dependencies between jobs are very intuitive too.

  • tofubl@discuss.tchncs.de · 1 day ago

    GitHub Actions really are horrible to work with. If I could spin up a container and test the commands on the fly, that would make things so much easier. But having to go through the commit, push, refresh-the-webpage insanity every time… It is really cool when the pipeline works, but getting there is very painful.

    • dublet@lemmy.world · 1 day ago

      If I could spin up a container and test the commands on the fly that would make things so much easier

      You can, if you use Docker-based actions.
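
      Roughly like this, a minimal action.yml for a Docker container action (the names are placeholders), so the image the workflow runs is the same one you can build and poke at locally:

        # hypothetical action.yml: the runner builds the repo's Dockerfile and runs it as the action
        name: "Containerized CI"
        description: "Runs the build/test commands baked into the Dockerfile"
        runs:
          using: "docker"
          image: "Dockerfile"

      Locally, docker build . followed by docker run on the resulting image exercises the same commands the runner will.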

  • tal@lemmy.today · 1 day ago

    Yes. For a single change. Like having an editor with 2 minute save lag, pushing commit using program running on cassette tapes or playing chess over snail-mail. It’s 2026 for Pete’s sake, and we won’t tolerate this behavior!

    Now of course, in some Perfect World, GitHub could have a local runner with all the bells and whistles. Or maybe something that would allow me to quickly check for progress upon the push or even something like a “scratch commit”, i.e. a way that I could testbed different runs without polluting history of both Git and Action runs.

    For the love of all that is holy, don’t let GitHub Actions manage your logic. Keep your scripts under your own damn control and just make the Actions call them!

    I don’t use GitHub Actions and am not familiar with it, but if you’re using it for continuous integration or build stuff, I’d think that it’s probably a good idea to have that decoupled from GitHub anyway, unless you want to be unable to do development without an Internet connection and access to GitHub.

    I mean, I’d wager that someone out there has already built some kind of system to do this for git projects. If you need some kind of isolated, reproducible environment, maybe Podman or similar, and just have some framework to run it?
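
    That “just make the Actions call them” advice quoted above amounts to something like this: a hypothetical workflow whose only job is to call a script that lives in the repo and also runs fine on a laptop (the script path is made up):

      # hypothetical thin workflow: GitHub Actions only schedules, ./ci/build.sh does the work
      name: CI
      on: [push, pull_request]
      jobs:
        build:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
            - run: ./ci/build.sh   # the same entry point you run locally, with no GitHub in the loop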

    like macOS builds that would be quite hard to get otherwise

    Does Rust not do cross-compilation?

    searches

    It looks like it can.

    https://rust-lang.github.io/rustup/cross-compilation.html
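
    As a rough sketch, the rustup side of it is just a target add plus a --target flag, whether as workflow steps like these or as the same two commands run locally; actually linking a real macOS binary still needs Apple’s SDK and linker, though:

      # hypothetical steps: type-check a macOS target from a Linux machine or runner
      steps:
        - run: rustup target add x86_64-apple-darwin      # pull in the target's standard library
        - run: cargo check --target x86_64-apple-darwin   # compiles and checks, but does not link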

    I guess maybe MacOS CI might be a pain to do locally on a non-MacOS machine. You can’t just freely redistribute MacOS.

    goes looking

    Maybe this?

    https://www.darlinghq.org/

    Darling is a translation layer that lets you run macOS software on Linux

    That sounds a lot like Wine

    And it is! Wine lets you run Windows software on Linux, and Darling does the same for macOS software.

    As long as that’s sufficient, I’d think that you could maybe run MacOS CI in Darling in Podman? Podman can run on Linux, MacOS, Windows, and BSD, and if you can run Darling in Podman, I’d think that you’d be able to run MacOS stuff on whatever.

    • setsubyou@lemmy.world · 1 day ago

      You could also just only use Macs. In theory ARM Macs let you build and test for macOS (host or vm), Linux (containers or vm), Windows (vm), iOS (simulator or connected device), and Android (multiple options), both ARM and x86-64.

      At least in theory. I think in practice I’d go mad. Not from the Linux part though. That part just works because podman on ARM Macs will transparently use emulation for x86 containers by default. (You can get the same thing on Linux with qemu-user-static btw, for a lot more architectures too.)

      • tal@lemmy.today · 23 hours ago (edited)

        You could also just only use Macs.

        I actually don’t know what the current requirement is. Back in the day, Apple used to build some of the OS — like QuickDraw — into the ROMs, so unless you had a physical Mac, not just a purchased copy of MacOS, you couldn’t legally run MacOS, since the ROM contents were copyrighted, and doing so would require infringing on the ROM copyright. Apple obviously doesn’t care about this most of the time, but I imagine that if it becomes institutionalized at places that make real money, they might.

        But I don’t know if that’s still the case today. I’m vaguely recalling that there was some period where part of Apple’s EULA for MacOS prohibited running MacOS on non-Apple hardware, which would have been a different method of trying to tie it to the hardware.

        searches

        This is from 2019, and it sounds like at that point, Apple was leveraging the EULAs.

        https://discussions.apple.com/thread/250646417?sortBy=rank

        Posted on Sep 20, 2019 5:05 AM

        The widely held consensus is that it is only legal to run virtual copies of macOS on a genuine Apple made Apple Mac computer.

        There are numerous packages to do this but as above they all have to be done on a genuine Apple Mac.

        • VMware Fusion - this allows creating VMs that run as windows within a normal Mac environment. You can therefore have a virtual Mac running inside a Mac. This is useful to either run simultaneously different versions of macOS or to run a test environment inside your production environment. A lot of people are going to use this approach to run an older version of macOS which supports 32bit apps as macOS Catalina will not support old 32bit apps.
        • VMware ESXi aka vSphere - this is a different approach known as a ‘bare metal’ approach. With this you use a special VMware environment and then inside that create and run virtual machines. So on a Mac you could create one or more virtual Mac but these would run inside ESXi and not inside a Mac environment. It is more commonly used in enterprise situations and hence less applicable to Mac users.
        • Parallels Desktop - this works in the same way as VMware Fusion but is written by Parallels instead.
        • VirtualBox - this works in the same way as VMware Fusion and Parallels Desktop. Unlike those it is free of charge. Ostensible it is ‘owned’ by Oracle. It works but at least with regards to running virtual copies of macOS is still vastly inferior to VMware Fusion and Parallels Desktop. (You get what you pay for.)

        Last time I checked Apple’s terms you could do the following.

        • Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of doing software development
        • Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of testing
        • Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of being a server
        • Run a virtualised copy of macOS on a genuine Apple made Mac for personal non-commercial use

        No. Apple spells this out very clearly in the License Agreement for macOS. Must be installed on Apple branded hardware.

        They switched to ARM in 2020, so unless their legal position changed around ARM, I’d guess that they’re probably still relying on the EULA restrictions. That being said, EULAs have also been thrown out for various reasons, so…shrugs

        goes looking for the actual license text.

        Yeah, this is Tahoe’s EULA, the most-recent release:

        https://www.apple.com/legal/sla/docs/macOSTahoe.pdf

        Page 2 (of 895 pages):

        They allow it only on Apple-branded hardware for individual purchases unless you buy from the Mac Store. For Mac Store purchases, they allow up to two virtual instances of MacOS to be executed on Apple-branded hardware that is also running the OS, and only under certain conditions (like for software development). And for volume purchase contracts, they say that the terms are whatever the purchaser negotiated. I’m assuming that there’s no chance that Apple is going to grant some “go use it as much as you want whenever you want to do CI tests or builds for open-source projects targeting MacOS” license.

        So for the general case, the EULA prohibits you from running MacOS wherever on non-Apple hardware.

        • setsubyou@lemmy.world · 23 hours ago

          Yeah, it’s a major pain at my work because our cloud doesn’t support Macs (like e.g. AWS would), so we run a server room with a bunch of Macs that we wouldn’t otherwise need.

    • morriscox@lemmy.world · 1 day ago

      I sent the Darling link to a brother and suggested that he use it with Parallels… I couldn’t resist.

  • melfie@lemy.lol · 1 day ago

    Forgejo Actions are supposedly modeled after GHA, but I’ve not used them, even though I’m self-hosting Forgejo. I’ve been considering trying them out soon.

  • franzbroetchen@feddit.org · 13 hours ago (edited)

    Man, I hate PHP with a burning passion - much, much more than GitHub Actions. And yes, those already suck big time. PHP made me question my whole career just by its utter and pure stupidity and atrocious design choices. IMHO nothing comes even close to that programming language in terms of how pathetic it is. Just had to vent for a moment.