• 0 Posts
  • 877 Comments
Joined 2 years ago
Cake day: October 6, 2023


  • I fully agree.

    Having Simon as the main character constantly had me feeling like one of the reveals would be that the scanning process was imperfect and somehow left him mentally damaged. Alas, it seems he was just naturally gifted with the emotional control and abstract reasoning abilities of a toddler.

    I get that it’s hard to explain a story through the internal monologue of a first-person character, so making them oblivious is a handy way to explain things to the player. But Soma felt like it was actively insulting my intelligence by assuming I needed a drool-proof keyboard.

  • Workplace safety is quickly turning from a factual and risk-based field into a vibes-based field, and that’s a bad thing for 95% of real-world risks.

    To elaborate a bit: the current trend in safety is “Safety Culture”, meaning “Getting Betty to tell Alex that they should actually wear that helmet and not just carry it around”. And at that level, that’s a great thing. On-the-ground compliance is one of the hardest things to actually implement.

    But that training is taking the place of actual, risk-based training. It’s all well and good that you feel comfortable talking about safety, but if you don’t know what you’re talking about, you’re not actually making things safer. This kind of training is also completely useless at any level above the worksite. You can’t make management-level choices based on feeling comfortable; you need to actually know some stuff.

    I’ve run into numerous issues where people feel safe when they’re not, and feel at risk when they’re safe. Safety Culture is absolutely important, and feeling safe to talk about your problems is a good thing. But that should come AFTER being actually able to spot problems.



  • I’m a bit more pessimistic. I fear that LLM-pushers calling their bullshit-generators “AI” is going to drag other applications down with them. Because I’m pretty sure that when LLMs all collapse in a heap of unprofitable e-waste and take most of the stock market with them, the funding and capital for the rest of AI is going to die right along with LLMs.

    And there are lots of useful AI applications in every scientific field; data interpretation with AI is extremely useful, and I’m very afraid it’s going to suffer from OpenAI’s death.