LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.

“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”

Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.

  • RDSM@discuss.tchncs.de · 2 days ago

    “Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.”

    Have they tested actual SOTA models?

    • Hawk@lemmynsfw.com · 2 days ago

      I don’t think it would have made much of a difference, because even the state-of-the-art models still aren’t a database.

      Maybe more recent models can store more information in fewer parameters, but it’s probably still going to come down to the size of the model.

      The only exception is if there really is some pattern in modern history that the model can learn, but I doubt that.

      What this article really brings to light is that people tend to use these models for things they’re not good at, because the marketing says otherwise.