• beeng@discuss.tchncs.de · 14 up · 1 day ago

    You’d think these centralised LLM search providers, e.g. Perplexity or Claude, would be caching a lot of this stuff.
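
    A minimal sketch of the kind of caching being suggested, assuming an in-memory store, a hypothetical one-day TTL, and example.com as a stand-in URL; not any provider’s actual pipeline:

    import time
    import requests

    _cache = {}              # url -> (fetched_at, body)
    TTL_SECONDS = 24 * 3600  # assume a day is "fresh enough" for mostly static pages

    def fetch(url: str) -> str:
        entry = _cache.get(url)
        if entry is not None:
            fetched_at, body = entry
            if time.time() - fetched_at < TTL_SECONDS:
                return body  # served from the cache, no request hits the site
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        _cache[url] = (time.time(), resp.text)
        return resp.text

    # The second call within the TTL returns the cached copy instead of re-crawling.
    fetch("https://example.com/")
    fetch("https://example.com/")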

    • droplet6585@lemmy.ml · 38 up / 1 down · 24 hours ago

      There are two prongs to this:

      1. Caching is an optimization strategy used by legitimate software engineers. AI dorks are anything but.

      2. Crippling information sources outside of the service means information is more easily “found” inside the service.

      So if it was ever a bug, it’s now a feature.

      • jacksilver@lemmy.world · 15 up · 19 hours ago

        Third prong: constantly looking for new information. Yeah, most of these sites may be basically static, but it’s probably cheaper and easier to just recrawl everything constantly than to work out which pages have changed.

    • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 9 up · 24 hours ago

      They’re absolutely not crawling it every time they need to access the data. That would be an incredible waste of processing power on their end as well.

      Code, though, does change somewhat often. At the bare minimum they’d still need to check whether it has been updated.
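
      A minimal sketch of that “check whether it changed” step, assuming HTTP conditional requests that replay the ETag / Last-Modified validators the server sent last time (the in-memory stores and usage are hypothetical):

      import requests

      _validators = {}  # url -> validators from the previous fetch
      _bodies = {}      # url -> last body we saw

      def fetch_if_changed(url: str) -> str:
          headers = {}
          known = _validators.get(url, {})
          if "ETag" in known:
              headers["If-None-Match"] = known["ETag"]
          if "Last-Modified" in known:
              headers["If-Modified-Since"] = known["Last-Modified"]

          resp = requests.get(url, headers=headers, timeout=10)
          if resp.status_code == 304:      # not modified: nothing to re-download
              return _bodies[url]

          resp.raise_for_status()
          _validators[url] = {
              k: resp.headers[k] for k in ("ETag", "Last-Modified") if k in resp.headers
          }
          _bodies[url] = resp.text
          return resp.text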