• JustEnoughDucks@feddit.nl · +7 · 3 hours ago

    And then I get downvoted for laughing when people say that they use AI for “general research” 🙄🙄🙄

  • Repple (she/her)@lemmy.world · +6 · 3 hours ago

    I searched for pictures of Uranus recently. Google gave me pictures of Jupiter, and then the AI description on top chided me, telling me that what was shown were pictures of Jupiter, not Uranus. 20 years ago it would have just worked.

  • brsrklf@jlai.lu · +12 · 5 hours ago

    Only yesterday, I searched for a very simple figure: the number of public service agents in a specific administrative region. This is, obviously, public information. There is a government site where you can get it. However, I didn’t know the exact site, so I searched for it on Google.

    Of course, the AI summary shows up first and gives me a confident answer, accurately mirroring my exact request. However, the number seems way too low to me, so I go check the first actual search result, the aforementioned official site. Google’s shitty assistant took a sentence about a subgroup of agents and presented it as the total. The real number was clearly given just before it, and was about four times that.

    This is just a tidbit of information any human with the source would have identified in a second. How the hell are we supposed to trust AI for complex stuff after that?

    • Rhaedas@fedia.io · +27 · 20 hours ago

      While I do think it’s simply bad at generating answers, because that’s all that’s really going on: generating the most likely next word, which works a lot of the time but can then fail spectacularly…

      What if we’ve created AI, but by training it on internet content we’re simply being trolled by the ultimate troll combination ever?
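
      As a toy illustration of the “most likely next word” point above (a hand-written bigram table, nothing to do with any real model), here is roughly how greedy next-word generation can produce a sentence that reads fluently and is still confidently wrong:

          # Toy sketch: greedy "most likely next word" generation over a
          # made-up bigram table. Purely illustrative, not a real LLM.
          bigram_probs = {
              "the":    {"number": 0.6, "total": 0.4},
              "number": {"of": 0.9, "is": 0.1},
              "of":     {"agents": 0.7, "staff": 0.3},
              "agents": {"is": 1.0},
              "is":     {"4000": 0.55, "16000": 0.45},  # the wrong figure happens to be slightly more likely
          }

          def greedy_generate(start, steps):
              word, out = start, [start]
              for _ in range(steps):
                  choices = bigram_probs.get(word)
                  if not choices:
                      break
                  word = max(choices, key=choices.get)  # always take the single most likely next word
                  out.append(word)
              return " ".join(out)

          print(greedy_generate("the", 5))
          # -> "the number of agents is 4000": fluent, confident, and wrong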

      • seaQueue@lemmy.world · +19 · 19 hours ago

        This is what happens when you train your magical AI on a decade+ of internet shitposting

        • T156@lemmy.world · +12 · edited · 18 hours ago

          They didn’t learn from all the previous times someone tried to train a bot on the internet.

          • pogmommy@lemmy.ml · +6 · edited · 4 hours ago

            It’s almost poetic how Tay.ai, Microsoft’s earlier shitty AI, was also poisoned by internet trolling and became a Nazi on Twitter nearly a decade ago.

  • TheGoldenGod@lemmy.world · +46 −1 · 19 hours ago

    Training AI with internet content was always going to fail, as at least 60% of users online are trolls. It’s even dumber than expecting you can have a child from anal sex.

  • IninewCrow@lemmy.ca · +26 · edited · 19 hours ago

    In the late 90s and early 2000s, internet search engines were designed to actually find relevant things … it’s what made Google famous

    Since the 2010s, internet search engines have all been about monetizing, optimizing, directing, misdirecting, and manipulating searches in order to drive users to the highest-paying companies, businesses, groups, or individuals that best know how to use Search Engine Optimization. For the past 20 years, we’ve created an internet based on how we can manipulate everyone and everything in order to make someone money. The internet is no longer designed to freely and openly share information … it’s now just a wasteland of misinformation, disinformation, nonsense and manipulation, because we are all trying to make money off one another in some way.

    AI is just making all those problems bigger, faster and more chaotic. It’s trying to make money for someone, but it doesn’t know how yet … they sure are working on trying to figure it out, though.

    • T156@lemmy.world · +11 · 18 hours ago

      Not just the search engines, but the websites themselves as well. Gaming the search engines is now an entire profitable industry, not just people putting links to their friends’ websites at the bottom of their webpage, or making a webring.

      It’s just been a race to the bottom. The search engines get worse, as do the websites, and the whole thing is exacerbated by people today being able to churn out entire websites by the hundreds. Anyone trying to do things without playing the game simply ends up buried under layers of rubbish.

      • altkey@lemmy.dbzer0.com · +4 · edited · 15 hours ago

        The Sages of the modern day are the lucky few who know which old and boring sites to ask for an answer.

  • regrub@lemmy.world · +57 · 22 hours ago

    Who could have seen this coming? Definitely not the critics of LLM hyperscalers.

  • lemmylommy@lemmy.world · +29 −3 · 21 hours ago

    Well, that’s less bad than 100% SEO optimized garbage with LLM generated spam stories around a few Amazon links.

    • daniskarma@lemmy.dbzer0.com · +3 · 11 hours ago

      That guy is a moron.

      But AI assistance for taxes is also being introduced where I live (Spain, which is currently governed by a coalition of socialist parties).

      It’s not deployed yet, so I can’t say how it will work, but the preliminary info seems promising. They are going to use a publicly trained AI project that has already been released.

      The thing is, I don’t think this is precisely a Musk idea. It’s something that has probably been talked about by various tax agencies around the world in recent years. He’s probably just parroting the idea and giving the project to one of his billionaire friends.

  • Cosmic Cleric@lemmy.world · +10 −1 · edited · 20 hours ago

    From the article…

    Surprisingly, premium paid versions of these AI search tools fared even worse in certain respects. Perplexity Pro ($20/month) and Grok 3’s premium service ($40/month) confidently delivered incorrect responses more often than their free counterparts.

    Though these premium models correctly answered a higher number of prompts, their reluctance to decline uncertain responses drove higher overall error rates.
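
    A rough back-of-the-envelope illustration of that trade-off (the numbers below are made up, not taken from the article or the study): a model that declines uncertain prompts can end up with a lower overall error rate than one that answers almost everything, even if the second model gets more answers right in absolute terms, because every extra guess it gets wrong counts against it.

        # Made-up numbers, only to illustrate the refusal-vs-error-rate trade-off
        # described above; none of these figures come from the study.
        def stats(total, answered, correct):
            wrong = answered - correct
            return {"correct": correct, "wrong": wrong,
                    "declined": total - answered,
                    "error_rate": wrong / total}  # wrong answers over all prompts

        cautious  = stats(total=100, answered=70, correct=50)   # declines 30 prompts
        confident = stats(total=100, answered=98, correct=60)   # declines only 2

        print(cautious)   # error_rate = 0.20
        print(confident)  # error_rate = 0.38, despite more correct answers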

  • cyd@lemmy.world · +6 −3 · edited · 17 hours ago

    It’s strongly dependent on how you use it. Personally, I started out as a skeptic but by now I’m quite won over by LLM-aided search. For example, I was recently looking for an academic that had published some result I could describe in rough terms, but whose name and affiliation I was drawing a blank on. Several regular web searches yielded nothing, but Deepseek’s web search gave the result first try.

    (Though, Google’s own AI search is strangely bad compared to others, so I don’t use that.)

    The flip side is that for a lot of routine info that I previously used Google to find, like getting a quick and basic recipe for apple pie crust, the normal search results are now enshittified by ad-optimized slop. So in many cases I find it better to use a non-web-search LLM instead. If it matters, I always have the option of verifying the LLM’s output with a manual search.