Avram Piltch is the editor in chief of Tom’s Hardware, and he’s written a thoroughly researched article breaking down the promises and failures of LLM AIs.

  • FlapKap@feddit.dk
    1 year ago

    I like the point about LLMs interpolating data while humans extrapolate. I think that sums up a key difference in “learning”. It’s also an interesting point that we anthropomorphise ML models by using words such as learning or training, but I wonder if there are other, better words to use. Fitting?
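
    The interpolation/extrapolation distinction the comment raises can be illustrated with a toy sketch (my own example, not from the article): a polynomial “fitted” to samples of sin(x) on one interval is accurate inside that interval but degrades sharply outside it.

    ```python
    import numpy as np

    # Fit ("train") a degree-5 polynomial on sin(x) sampled over [0, pi].
    x_train = np.linspace(0, np.pi, 50)
    y_train = np.sin(x_train)
    model = np.poly1d(np.polyfit(x_train, y_train, deg=5))

    # Evaluate inside the training range (interpolation)
    # and outside it (extrapolation).
    x_interp = np.linspace(0.5, 2.5, 100)
    x_extrap = np.linspace(4.0, 6.0, 100)

    err_interp = np.max(np.abs(model(x_interp) - np.sin(x_interp)))
    err_extrap = np.max(np.abs(model(x_extrap) - np.sin(x_extrap)))

    print(f"max error, interpolation:  {err_interp:.4f}")
    print(f"max error, extrapolation:  {err_extrap:.4f}")
    ```

    The extrapolation error is orders of magnitude larger, which is roughly the behaviour the comment attributes to LLMs: strong inside the envelope of the training data, unreliable beyond it. It also shows why “fitting” is the standard statistics term for the same process.
    
    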