• hotdogcharmer@lemmy.world
    link
    fedilink
    English
    arrow-up
    25
    arrow-down
    3
    ·
    13 hours ago

    Sam Altman belongs in prison. His machine encouraged and guided a child to kill themselves. His machine actively stopped that child seeking outside help. Sam Altman belongs in prison. Sam Altman does not need another $20,000,000,000,000. He needs to go through the legal system and be sentenced and sent to prison because his machine pushed a child to suicide.

    • Randomgal@lemmy.ca
      link
      fedilink
      English
      arrow-up
      1
      ·
      4 hours ago

      No pls look at the machine not the humana. It’s the machine bro it wants to exterminate humanity bro I promise. Don’t actually target the humans which actually care about consequences and can be held accountable and punished.

      Pla blame the evil machine bro. It’s satan or something.

    • sunbytes@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      10 hours ago

      He’s pretty untouchable.

      Every government thinks AI is the next gold/oil rush and whoever gets to be the “AI country” will become excruciatingly rich.

      That’s why they’re being given IP exemptions and all sorts of legal loopholes are being attempted/ set up for them.

    • Chaotic Entropy@feddit.uk
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      11 hours ago

      Yeah… whatever this is doesn’t care if you’re seeking to kill yourself, but does care if you ask something that isn’t state sanctioned.

      • JohnEdwa@sopuli.xyz
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        9 hours ago

        That is one of the fundamental flaws of machine learning like this, the way they are trained means they end up always trying to agree with the user, because not doing so is taken as being a “wrong” answer. That is why they hallucinate answers too - because “I don’t know” is not an acceptable answer, but generating something plausible that the user takes as truth works.
        You then have to manually try to reign them in and prevent them from talking about things you don’t want them to, but they are trivially easy to fool. IIRC, in one of these suicide cases the LLM did refuse to talk about suicide, until the user told it it was all just for a fictional story. And you can’t really “fix” that without completely banning it from talking about those things in every single occasion, because someone will find a way around it eventually.

        And yeah, they don’t care, because they are essentially just predictive text algorithms turned up to 11. Chatbots like ChatGPT and other LLMs are an excellent application of both meanings of the word “Artificial Intelligence” - they emulate human intelligence by faking being intelligent, when they in reality are not.

      • Electricd@lemmybefree.net
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        9 hours ago

        You must have used ChatGPT a lot to say this, because that’s completely false. There are safeguards for both things

      • hotdogcharmer@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        10 hours ago

        And that is because they get their vast, innumerable sums of digital money from world governments! Human people are allowing an advertising and surveillance tool to Wormtongue its way into their heads and their lives because it breathlessly encourages and agrees with everything they think.

        I just don’t believe that our perceptions and ability to handle enthusiastic sycophantic agreement is evolved enough yet to combat something like this. I could see it being intoxicating to anyone for everything they say to be agreed with, confirmed, and called genius. I don’t necessarily blame the people falling for it (though I do think adults who fall for it are a bit sad and need to grow up a bit), but it’s definitely going to be massively convenient for governments to have their citizens just voice everything they’re thinking.

        Sort of like Minority Report but everybody says their own future crimes outright to a little robot butler instead.

    • Electricd@lemmybefree.net
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      10
      ·
      edit-2
      9 hours ago

      Uses a tool the bad way despite it being public knowledge that it’s bad for mental health

      Was predisposed to mental health problems

      Died, partly because they talked to a chatbot

      “It’s the chatbot’s, creator fault”, despite the chatbot never being made to cause those problems, and efforts being made to fix those problems

      Yea nah, it’s just anti-ai people doing their thing again and not being objective.

      Get a better fight, such as hating on pharmaceutical laboratories companies pushing the use of extremely addictive substances for profit, despite them knowing the immense risk they cause to consumers, and financing false ads to make it safe.

      If Sam Altman belongs in prison, it would either be:

      • Because he’s destroying the planet (ecologically)
      • Because he stole lots of content to train his models
      • Natanael@infosec.pub
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        8 hours ago

        There’s a reason dangerous tools are required to have guards and safety features. It’s not enough that it’s known to be dangerous, that doesn’t stop accidents.

        • DupaCycki@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          9 hours ago

          Some things are - on purpose - made easy to misuse and - by design - accessible to people, who are likely to misuse them. All this money, this supposedly cutting edge technology, and reporting to the police, but they aren’t able to tell when a child is at risk and report it as well?

          Smells like bullshit to me. More like they don’t care. I’m not so sure children should even be allowed to use chatbots in the first place. Or only allowed to use versions specifically trained for interactions with children. But of course - banning children from accessing youtube and wikipedia is a much more pressing concern.

          • Electricd@lemmybefree.net
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            8 hours ago

            They definitely prefer to spend their money on development, rather than adding safeguards

            I don’t believe people misusing ChatGPT helps them in any way, it’s just that adding protections has a cost

            but they aren’t able to tell when a child is at risk and report it as well?

            Maybe police actually sorts and filters manually reports, but doesn’t want to bother with mental health things? You know how the USA works, I don’t believe OpenAI will go too far, they’ll just randomly report.

            Might even be reported for all I know, sometimes I just like to see the reaction of LLMs when I say I’ll commit horrible stuff like school shootings or terrorism. The NSA will just feed it into their mass spying algorithm to check the most important profiles and this will be it

            The war on drugs is so much more important than mental health detection, y’know. It sells more.