• Veraticus@lib.lgbt (OP) · 1 year ago

    They only use words in context, and that is exactly the problem: an LLM doesn’t know what the words mean or what the context means; it’s glorified autocomplete.

    I guess it depends on what you mean by “information.” Since all of the words it uses are meaningless to it (it doesn’t understand anything it is asked or anything it says), I would say it has no information and knows nothing. At least, nothing more than a calculator knows when it returns 7 + 8 = 15. It doesn’t know what those numbers mean or what the result represents; it’s simply returning the result of a computation.

    So too LLMs responding to language.
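    To make the “glorified autocomplete” point concrete, here is a minimal sketch in Python (a toy bigram counter, nothing like a real transformer, and purely illustrative): the next word is chosen by comparing stored numbers, the same way a calculator produces 7 + 8 = 15 without any notion of what the result means.

    ```python
    from collections import Counter, defaultdict

    # Toy "training data" and toy "weights": counts of which word follows which.
    corpus = "the cat sat on the mat the cat ate the fish".split()
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def autocomplete(word: str) -> str:
        """Return the most frequent continuation: a pure count comparison,
        with no notion of what 'cat' or 'mat' actually mean."""
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else "<unknown>"

    print(autocomplete("the"))  # 'cat', chosen only because it was counted most often
    ```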

    • lily33@lemm.ee · 1 year ago

      Why is that a problem?

      For example, I’ve used it to learn the basics of Galois theory, and it worked pretty well.

      • The information is stored in the model, so it can tell me the basics.
      • The interactive nature of talking to an LLM actually helped me learn better than just reading.
      • And I know enough general math to recognize the rare occasions (and they were indeed rare) when it made things up.
      • Asking it questions can be better than searching Google, because Google needs exact keywords to find the answer, and the LLM can be more flexible (of course, neither will answer if the answer isn’t in the index/training data).

      So what if it doesn’t understand Galois theory? It can still teach it to me well enough. Frankly, if it did actually understand it, I’d be worried about slavery.