• lily33@lemm.ee
    1 year ago

    Why is that a problem?

    For example, I’ve used it to learn the basics of Galois theory, and it worked pretty well.

    • The information is stored in the model, so it can tell me the basics.
    • The interactive nature of talking to an LLM actually helped me learn better than just reading.
    • And I know enough general math to spot the rare occasions (and they really were rare) when it made things up.
    • Asking it questions can be better than searching Google, because Google needs exact keywords to find the answer, while the LLM can be more flexible (of course, neither will answer if the answer isn’t in the index/training data).

    So what if it doesn’t understand Galois theory - it can still teach it to me well enough. Frankly, if it did actually understand it, I’d be worried about slavery.