  • If the instance or community guidelines state “X isn’t allowed,” then it isn’t censorship to remove X. It becomes censorship when mods start removing things for reasons other than enforcing instance or community guidelines. Until that point, it’s just content moderation.

    If the c/Androids community guidelines state that “This community is about human-like robots. Posts regarding the phone OS are unwelcome” and a mod removes such a post, that isn’t censorship. Likewise for spam, or reposts, or any number of other things.

    On the other hand, if the mods remove a post about a human-like robot built in China because they’re sinophobic, that is censorship. Likewise if the robot were built by Tesla, if the lead engineer were a woman, or anything along those lines. And likewise if the post were instead critical of such a robot - still censorship (unless it’s a news-only community and the post was free text or a meme).

    Likewise if a community’s guidelines state that controversial statements without reputable sources backing them up, statements known to be false, or statements that have been flagged as false by a fact checker are prohibited, then removing such statements isn’t censorship. It’s moderation.



  • > I genuinely don’t understand why people here are taking it so hard that I wish the Immich devs were using semver.

    Because you didn’t say that; you said “Breaking changes in a point release? Not cool” and later “I’m basing this off the guidelines at semver.org.”

    I’m paraphrasing your comments from memory, to be clear, so apologies if I misquoted you.

    It certainly felt to me like you were assuming that this project was using semver and simply wasn’t following it well - not that you wouldn’t want to use a project that has this many breaking changes / that doesn’t follow semver. Those complaints both make a lot more sense to me - and I’ve seen many people say similar things about Immich in the past. In fact, it’s a big part of why I haven’t migrated from Photoprism to Immich myself - in this regard they’re complete opposites.


  • > I don’t think there’s any room to argue that announcing a 1.x with a change the developers say is a breaking change, which is what Immich have done, fits within the semver.org guidelines.

    That wasn’t the argument.

    Following semver is optional. If a project doesn’t explicitly state it is following semver, it shouldn’t be assumed that it is. With regard to Immich in particular, a cursory review of their documentation makes it clear that they are not following semver. Literally, go to https://immich.app/ and read the text at the very top of the page:

    > ⚠️ The project is under very active development. Expect bugs and changes.

    Go to the repo and you’ll see the README, which states at the very top:

    • ⚠️ The project is under very active development.
    • ⚠️ Expect bugs and breaking changes.

    If you can read that, see that they’re on major version 1 with a minor version over 100, and you still think they’re using semver, then that’s on you.

    The devs have stated they won’t be using semver until they consider Immich production ready, and that moving to a 1.x version from 0.x was a mistake made some time ago. If you want to think about it as though it is semver, consider the major version to still be 0. See https://github.com/immich-app/immich/discussions/5086#discussioncomment-7593227 for example.

    As this project is clearly not following semver, the semver guidelines aren’t applicable and haven’t been violated.

    > I don’t think there’s any room to argue

    Even if semver were applicable in this case, I would still disagree. The text from semver.org states:

    > 8. Major version X (X.y.z | X > 0) MUST be incremented if any backward incompatible changes are introduced to the public API.

    It doesn’t state that any backward incompatible changes, period, require a major version increase - only changes to the public API. I would personally argue that the deployment configuration is part of the public API, but not all project owners agree with me. Even those who do agree might say that this was not a documented deployment configuration and thus not part of the public API, so it doesn’t necessitate a major version increase - but since they knew people were using that configuration anyway, they included a note about a potentially breaking change as a courtesy to those users.
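
    To make that concrete, here’s a hypothetical compose-level change - invented for illustration, not taken from Immich’s changelog - that breaks existing deployments without touching a single HTTP endpoint:

    services:
      app:
        environment:
          # Hypothetical: suppose the previous release called this
          # variable DB_HOSTNAME. Renaming it breaks every existing
          # compose file that still sets DB_HOSTNAME, even though no
          # HTTP endpoint changed.
          DATABASE_HOST: database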




  • There have been so many places in front end web dev that used the abbreviation “a11y” without defining it (or explaining the 11 - it’s a numeronym for “accessibility”: a, then 11 letters, then y) that for years I assumed it was just the name of a particular library that had gotten Kleenexed.

    (To be clear, I’m using “Kleenexed” as a verb here to mean “genericized explosively, as if a sneeze.”)

    It didn’t help to look at the code, either. “Okay cool, so all this does is add a bunch of random extra tags to the DOM? Doesn’t seem super useful but okay, I guess there’s probably some tool out there that depends on them but we probably don’t use it.”





  • This is what I would try first. It looks like 1337 is the exposed port, per https://github.com/nightscout/cgm-remote-monitor/blob/master/Dockerfile

    x-logging: &default-logging
      options:
        max-size: '10m'
        max-file: '5'
      driver: json-file
    
    services:
      mongo:
        image: mongo:4.4
        volumes:
          - ${NS_MONGO_DATA_DIR:-./mongo-data}:/data/db:cached
        logging: *default-logging
    
      nightscout:
        image: nightscout/cgm-remote-monitor:latest
        container_name: nightscout
        restart: always
        depends_on:
          - mongo
        logging: *default-logging
        ports:
          - 1337:1337
        environment:
          ### Variables for the container
          NODE_ENV: production
          TZ: [removed]
    
          ### Overridden variables for Docker Compose setup
          # The `nightscout` service can use HTTP, because we use `nginx` to serve the HTTPS
          # and manage TLS certificates
          INSECURE_USE_HTTP: 'true'
    
          # For all other settings, please refer to the Environment section of the README
          ### Required variables
          # MONGO_CONNECTION - The connection string for your Mongo database.
          # Something like mongodb://sally:sallypass@ds099999.mongolab.com:99999/nightscout
          # The default connects to the `mongo` included in this docker-compose file.
          # If you change it, you probably also want to comment out the entire `mongo` service block
          # and `depends_on` block above.
          MONGO_CONNECTION: mongodb://mongo:27017/nightscout
    
          # API_SECRET - A secret passphrase that must be at least 12 characters long.
          API_SECRET: [removed]
    
          ### Features
          # ENABLE - Used to enable optional features, expects a space delimited list, such as: careportal rawbg iob
          # See https://github.com/nightscout/cgm-remote-monitor#plugins for details
          ENABLE: careportal rawbg iob
    
          # AUTH_DEFAULT_ROLES (readable) - possible values readable, denied, or any valid role name.
          # When readable, anyone can view Nightscout without a token. Setting it to denied will require
          # a token from every visit, using status-only will enable api-secret based login.
          AUTH_DEFAULT_ROLES: denied
    
          # For all other settings, please refer to the Environment section of the README
          # https://github.com/nightscout/cgm-remote-monitor#environment
    
    
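    If you save that as docker-compose.yml, `docker compose up -d` should bring both services up. I’m assuming here, per the INSECURE_USE_HTTP comment in the file, that nginx runs outside this compose stack and is what terminates TLS and proxies to port 1337.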

  • To run it with Nginx instead of Traefik, you need to figure out what port Nightscout’s web server runs on, then expose that port, e.g.,

    services:
      nightscout:
        ports:
          - 3000:3000
    

    You can remove the labels, since those are only used by Traefik, and you can remove the Traefik service itself as well.

    Then just point Nginx to that port (e.g., 3000) on your local machine.
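
    To sketch the Nginx side - assuming Nightscout ends up on port 3000 as above, and with a placeholder domain and certificate paths you’d swap for your own - a minimal server block would look something like:

    server {
        listen 443 ssl;
        server_name nightscout.example.com;  # placeholder domain

        # Placeholder cert paths - point these at whatever certbot or
        # your CA tooling actually generates
        ssl_certificate     /etc/letsencrypt/live/nightscout.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/nightscout.example.com/privkey.pem;

        location / {
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }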

    ---

    Traefik has to know the port too, but it will auto-detect the port that a local Docker service is listening on. It looks like your config is relying on that feature, as I don’t see the label that explicitly specifies the port.
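
    For reference, if you ever want to pin it explicitly instead of relying on auto-detection, the Traefik v2 label looks like this (I’m assuming `nightscout` as the Traefik service name):

    services:
      nightscout:
        labels:
          # Explicitly tells Traefik which container port to route to
          - traefik.http.services.nightscout.loadbalancer.server.port=3000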


  • Fair point; I should have asked about commercial games in general.

    That said, I didn’t mean that the game studio itself would do the AI training and own the models in-house; if they did, I’d expect it to go just as poorly as you would. Rather, I’d expect the model to be created by an organization that specializes in that sort of thing.

    “Marey” is one example I found of a GenAI model whose creators say it was trained ethically.

    Another is Adobe Firefly, where Adobe says they trained only on licensed and public domain content. It also sounds like Adobe is paying the artists whose content was used for AI training. I believe that Canva is doing something similar.

    StabilityAI is also doing something similar with Stable Audio 2.0, where they partnered with a music licensing company, AudioSparx, to ensure that artists are compensated, AI opt outs are respected, etc…

    I haven’t dug into any of those too deeply, but at a surface level, at least, they seem to be heading in the right direction.

    One of the GenAI scenarios that’s the most terrifying to me is the idea of a company like Disney using all the material they have copyright for to train their own, proprietary GenAI image, audio, and video tools… not because I think the outputs would be bad, but because of the impact that would have on creators in that industry.

    Fortunately, as long as copyright doesn’t apply to purely AI-generated outputs - even from a model trained entirely on your own content - I don’t think Disney specifically will do this.

    I mention that as an example because that usage of AI, regardless of how ethically the model was trained, would still be unethical, in my opinion. Likewise in game creation: an ethically trained and operated model could still be used unethically to eliminate many people’s jobs solely in the interest of better profits.

    I’d be on board with AI use (in game creation or otherwise) if a company were to say, “We’re not changing the budget we have for our human workforce, including for contractors, licensed art, and so on, other than increasing it as inflation and wages increase. We will be using ethical AI models to create more content than we otherwise would have been able to.” But I feel like in a corporate setting, its use is almost always going to result in them cutting jobs.