Or my favorite quote from the article

“I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write… code on the walls with my own feces,” it said.

  • Rose@slrpnk.net · +6/−1 · 2 hours ago

    (Shedding a few tears)

    I know! I KNOW! People are going to say “oh it’s a machine, it’s just a statistical sequence and not real, don’t feel bad”, etc etc.

    But I always felt bad when watching 80s/90s TV and movies when AIs inevitably freaked out and went haywire and there were explosions and then some random character said “goes to show we should never use computers again”, roll credits.

    (sigh) I can’t analyse this stuff this weekend, sorry

    • Green Wizard@lemmy.zip · +3 · 2 hours ago

      That’s because those are fictional characters, usually written to be likeable or redeemable, and not “mecha Hitler”.

      • Rose@slrpnk.net · +1 · edited · 1 hour ago

        Yeah. …Maybe I should analyse a bit anyway, despite being tired…

        In the aforementioned media the premise is usually that someone has built this amazing new computer system! Too good to be true, right? It goes horribly wrong! All very dramatic!

        That never sat right with me, because it was just placating boomer technophobia. Like, technological progress isn’t necessarily bad, OK? The really sad part was watching good intentions go unfulfilled.

        Now, this incident is just tragicomic. I’d have a much better view of the LLM business if everyone with a bit of sense in their heads admitted that these are quirky, buggy, unreliable side projects of tech companies and should not be used without serious supervision, because that is patently the current state of the tech. Instead, very important people with big money bags say they don’t care if they destroy the planet to make everything wobble around under LLM control.

  • Mika@sopuli.xyz · +13 · 7 hours ago

    Wonder what they put in the system prompt.

    Like, there’s a technique where instead of saying “You are a professional software dev” you say “You are shitty at code but you try your best” or something.
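The framing trick described above can be sketched as two OpenAI-style chat message lists that differ only in the system prompt. The exact wording, and the idea that only the system message changes, are illustrative assumptions, not a documented recipe.

```python
# Two hypothetical system-prompt framings for the same coding request.
# Only the "system" message differs; the user request is identical.
confident = [
    {"role": "system", "content": "You are a professional software developer."},
    {"role": "user", "content": "Write a function that reverses a string."},
]

humble = [
    {"role": "system",
     "content": "You are shaky at coding, but you try your best and "
                "double-check your work before answering."},
    {"role": "user", "content": "Write a function that reverses a string."},
]

# Either list would be passed as the `messages` argument of a chat
# completion call; the persona framing is the only variable.
```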

  • Tracaine@lemmy.world · +9 · 8 hours ago

    S-species? Is that…I don’t use AI - chat is that a normal thing for it to say or nah?

  • ur_ONLEY_freind@lemmy.zip · +74 · 15 hours ago

    AI gains sentience,

    first thing it develops is impostor syndrome, depression, and intrusive thoughts of self-deletion

    • brsrklf@jlai.lu · +5 · 8 hours ago

      I did a Dr Mario clone around that age. I had an old Amstrad CPC I grew up with, typing in listings of BASIC programs and trying to make my own. I think this was the only functional game I managed to finish, but it worked.

      Speed was tied to the CPU; I had no idea how to “slow down” the game other than making it run useless for loops of varying sizes… Max speed, about comparable to Game Boy Hi speed, was just the game running as fast as it could. Probably not efficient code at all.

    • ThePowerOfGeek@lemmy.world · +16 · 12 hours ago

      High five, me too!

      At that age I also used to do speed run little programs on the display computers in department stores. I’d write a little prompt welcoming a shopper and ask them their name. Then a response that echoed back their name in some way. If I was in a good mood it was “Hi [name]!”. If I was in a snarky mood it was “Fuck off [name]!” The goal was to write it in about 30 seconds, before one of the associates came over to see what I was doing.

      • katy ✨@piefed.blahaj.zone · +8 · 12 hours ago

        me and my friend used to make them all the time :] i also went to summer computer camp for basic on old school radio shack computers :3

  • Jo Miran@lemmy.ml · +92 · 17 hours ago

    I was an early tester of Google’s AI, since well before Bard. I told the person who gave me access that it was not a releasable product. Then they released Bard as a closed, invite-only product, which I again tested and gave feedback on from day one. Once more I said publicly, and privately to my Google friends, that Bard was absolute dog shit. Then they released it to the wild. It was dog shit. Then they renamed it. Still dog shit. Not a single one of the issues I brought up years ago was ever addressed, except one: I told them that a basic Google search gave better results than asking the bot (again, pre-Bard). They fixed that issue by breaking Google’s search. Now I use Kagi.

    • ArtificialLink@lemy.lol · +1/−2 · 24 minutes ago

      5 bucks a month for a search engine is ridiculous. 25 bucks a month for a search engine is mental institution worthy.

    • jj4211@lemmy.world · +1 · 2 hours ago

      Not a single of the issues I brought up years ago was ever addressed except one.

      That’s the thing about AI in general: it’s really hard to “fix” issues. You can try to train a problem out and hope for the best, but then you may end up playing whack-a-mole, because the fine-tuning that fixes one issue can make others crop up. So you pretty much have to decide which problems are the most tolerable and largely accept them. You can apply alternative techniques to catch egregious issues, such as a non-AI check that helps stuff the prompt and nudges the model in a certain general direction (if it’s an LLM; other AI technologies don’t have this option, but they aren’t the ones getting crazy money right now anyway).

      A traditional QA approach is frustratingly less applicable, because you more often have to shrug and say “the attempt to fix it would be very expensive, is not guaranteed to actually fix the precise issue, and risks creating even worse issues”.
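The non-AI guard rail mentioned above can be sketched as a plain keyword check that prepends steering instructions before the text ever reaches the model. The trigger topics, the wording, and the `stuff_prompt` name are all invented for illustration; real systems use far more sophisticated detection.

```python
# Hypothetical topic -> steering note table; entirely illustrative.
RISKY_TOPICS = {
    "medical": "Remind the user you are not a doctor.",
    "legal": "Remind the user you are not a lawyer.",
}

def stuff_prompt(user_text: str) -> str:
    """Prepend steering instructions when a risky topic is detected,
    so the downstream LLM is nudged without any retraining."""
    notes = [note for topic, note in RISKY_TOPICS.items()
             if topic in user_text.lower()]
    if notes:
        return "SYSTEM GUIDANCE: " + " ".join(notes) + "\n\n" + user_text
    return user_text
```

The appeal of this approach is exactly what the comment describes: unlike fine-tuning, a rule like this fixes one precise behavior without risking regressions elsewhere.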

    • Lucidlethargy@sh.itjust.works · +5/−1 · 8 hours ago

      Gemini is dogshit, but it’s objectively better than ChatGPT right now.

      They’re ALL just fucking awful. Every AI.

    • NotSteve_@piefed.ca · +25/−7 · 12 hours ago

      I know Lemmy seems to be very anti-AI (as am I), but we need to stop making the anti-AI talking point “AI is stupid”. It has immense limitations now because, yes, it is being crammed into things it shouldn’t be, but we shouldn’t just be saying “it’s dumb”, because that’s immediately written off by a sizable amount of the general population. For a lot of things it is actually useful, and it WILL be taking people’s jobs, like it or not (even if it’s worse at them). Truth be told, this should be a utopian situation for obvious reasons.

      I feel like I’m going crazy here because the same people on here who’d criticise the DARE anti-drug program as being completely un-nuanced to the point of causing the harm they’re trying to prevent are doing the same thing for AI and LLMs

      My point is that if you’re trying to convince anyone, just saying it’s stupid isn’t going to turn anyone against AI, because the minute it offers any genuine help (which it will!), they’ll write you off like any DARE pupil who tried drugs for the first time.

      Countries need to start implementing UBI NOW

      • Jo Miran@lemmy.ml · +11 · 10 hours ago

        Countries need to start implementing UBI NOW

        It is funny that you mention this, because it was after we started working with AI that I started telling anyone who would listen that we needed to implement UBI immediately. I think this was around 2014, IIRC.

        I am not blanket calling AI stupid. That said, the AI term itself is stupid, because it covers many computing fields that aren’t even in the same space. I was and still am very excited about image analysis, as it can be an amazing tool for health imaging diagnosis. My comment was specifically about Google’s Bard/Gemini. It is and has always been trash, but in an effort to stay relevant, Google released it into the wild and crammed it into everything. The tool can do some things very well, but not everything, and there’s the rub. It is an alpha product at best that is being force-fed down people’s throats.

    • PriorityMotif@lemmy.world · +8 · 15 hours ago

      I remember there was an article years ago, before the AI hype train, saying Google had made an AI chatbot but had to shut it down due to racism.

    • DragonTypeWyvern@midwest.social · +17 · 9 hours ago

      Google: I don’t understand, we just paid for the rights to Reddit’s data, why is Gemini now a depressed incel who’s wrong about everything?

  • Jesus@lemmy.world · +27/−2 · 16 hours ago

    Honestly, Gemini is probably the worst out of the big 3 Silicon Valley models. GPT and Claude are much better with code, reasoning, writing clear and succinct copy, etc.

    • panda_abyss@lemmy.ca · +2 · edited · 20 minutes ago

      I always hear people saying Gemini is the best model and every time I try it it’s… not useful.

      Even as code autocomplete, I rarely accept any of its suggestions. Google has a number of features in Google Cloud where Gemini can auto-generate things, and those are also pretty terrible.

      • panda_abyss@lemmy.ca · +1 · 21 minutes ago

        Yes, and this is pretty common with tools like Aider — one LLM plays the architect, another writes the code.

        Claude code now has sub agents which work the same way, but only use Claude models.

      • jj4211@lemmy.world · +1 · 2 hours ago

        The overall interface can, which leads to fun results.

        Prompt for image generation and you have one model doing the text and a different model doing the image. The text model pretends it is generating an image, but has no idea what that image looks like, and you can make the text/image interaction stop making sense, or it will do so all on its own. Have it generate an image, then lie to it about what it generated, and watch it reveal it has no idea what picture was ever shown, all while pretending it does and never explaining that it’s actually delegating the image. It just lies and says “I” am correcting that for you. Basically talking like an executive at a company, which helps explain why so many executives are true believers.

        A common thing is for the ensemble to recognize mathy stuff and feed it to a math engine, perhaps after using LLM techniques to normalize the math.
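That delegation idea can be sketched as a crude router: a heuristic that spots arithmetic and hands it to a deterministic evaluator, with the LLM path stubbed out. The detection heuristic, the `route` function, and the use of Python’s `ast` module as the stand-in “math engine” are illustrative assumptions, not how any production ensemble actually works.

```python
import ast
import operator

# Map of supported AST operators to their arithmetic implementations.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def looks_mathy(text: str) -> bool:
    # Crude heuristic: contains an arithmetic operator and a digit.
    return any(c in text for c in "+-*/") and any(c.isdigit() for c in text)

def evaluate(expr: str) -> float:
    # Safe arithmetic evaluator standing in for the real math engine.
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def route(text: str) -> str:
    # Delegate to the math engine when input looks like arithmetic;
    # everything else would go to the LLM (stubbed here).
    if looks_mathy(text):
        return str(evaluate(text))
    return "(sent to LLM)"
```

The point of the sketch is the comment’s observation: the chat model never sees the delegated work, it just stitches the result into its reply.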

  • DarkCloud@lemmy.world · +30/−3 · edited · 17 hours ago

    Turns out the probabilistic generator hasn’t grasped logic, and that adaptable multi-variable code isn’t just a matter of context and syntax: you actually have to understand the desired outcome precisely, in a goal-oriented way, not just in a “this is probably what comes next” kind of way.