• BotCheese@beehaw.org
    1 year ago

    And we’re nowhere near done scaling LLMs

    I think we might be. I remember hearing OpenAI was training on so much literary data that they couldn’t hold enough back to test the model on. Though I may be misremembering.

    • lloram239@feddit.de
      1 year ago

      There are still plenty of videos to watch and games to play. We might be running short on books, but there are many other sources of information that aren’t accessible to LLMs at the moment.

      Also, just because the training set contained most of the books doesn’t mean the model itself was large enough to learn from all of them. The more detailed your questions get, the bigger the chance it will get them wrong, even if that knowledge should have been in the training set. For example, ChatGPT is pretty terrible as a walkthrough for games, even though there should be more than enough walkthroughs in the training set to learn from. Same for summarizing movies: it will handle the most popular ones, but quickly fall apart with anything a little lesser known.

      There is of course also the possibility that using the LLM itself as a knowledge store is a bad idea. Humans use books for that, not their brains. So an LLM that is very good at looking things up in a library could answer a lot more without the enormous model size and training cost (rough sketch of the idea at the end of this comment).

      Basically, there are still a ton of unexplored areas, even if we have collected all the digital books.
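
      To make the “looking things up in a library” idea concrete, here is a toy sketch of retrieval-augmented generation. Everything in it is made up for illustration: the tiny library, the bag-of-words retrieval, and the ask_llm() placeholder. A real system would use proper embeddings and an actual model call; this just shows where the facts would live.

      ```python
      # Toy retrieval-augmented generation: facts live in the "library",
      # the model only has to read the retrieved context and answer.
      from collections import Counter
      import math

      # Hypothetical mini-library; a real one would hold many documents.
      library = {
          "walkthrough": "In Blast Pit you lure the tentacle away with a thrown grenade before starting the rocket engine.",
          "movie": "A mining crew answers a distress signal and brings something hostile back aboard their ship.",
      }

      def vectorize(text):
          # Crude bag-of-words vector; stands in for a real embedding model.
          return Counter(text.lower().split())

      def cosine(a, b):
          dot = sum(a[t] * b[t] for t in a)
          norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      def retrieve(query, k=1):
          # Return the k documents most similar to the query.
          qv = vectorize(query)
          ranked = sorted(library.values(), key=lambda doc: cosine(qv, vectorize(doc)), reverse=True)
          return ranked[:k]

      def ask_llm(prompt):
          # Placeholder: in practice this would call a (comparatively small) language model.
          return "[model answer based on]\n" + prompt

      def answer(query):
          context = "\n".join(retrieve(query))
          # The model only rephrases the retrieved text; it doesn't have to
          # have memorized every walkthrough or movie plot in its weights.
          return ask_llm("Context:\n" + context + "\n\nQuestion: " + query)

      print(answer("How do I get past the tentacle in Blast Pit?"))
      ```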