• HakFoo@lemmy.sdf.org · 5 points · 7 days ago

    I’d suspect the low “density” of context makes it prone to hallucinations. You need to load in 3,000 lines to express what Python does in 3, so there are many more chances to guess the next token wrong.
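    As a toy illustration of that “density” point (my own example, not from the thread): a task like finding the most common word fits in a few Python lines, while a lower-level language would need many times the tokens, each one a chance for the model to slip.

```python
# Most-common-word count in ~3 lines of Python; an LLM generating
# this emits far fewer tokens than it would for the equivalent
# boilerplate-heavy code in a verbose language.
from collections import Counter

words = "the quick brown fox jumps over the lazy dog the".split()
print(Counter(words).most_common(1))  # [('the', 3)]
```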

    • Balder@lemmy.world · 2 points · 6 days ago

      I was going to say that: the higher the abstraction level, the better LLMs can probably reason about the code, because once the abstraction is learned, it takes fewer tokens.