Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their prime LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
melpomenesclevage@lemmy.dbzer0.com:

    unfortunately, no. when the concept of machine intelligence was first being explored, Joseph Weizenbaum's secretary used ELIZA, the basic fits-on-a-page chatbot. they said it was absolutely a person, that they were friends with it. he walked them through it and explained the code (which, again, fits on one page in a modern language, a couple of punch cards back then; you can look at what looks at first glance like a faithful python port here). the secretary just would not believe him, INSISTED that it was a person, that it cared about them.
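
    for context, this is roughly the kind of program being described: simple pattern-and-response substitution. the sketch below is an illustrative toy in Python, not the port linked above; the patterns, reflections, and responses here are made up, but the mechanism (regex match, pronoun swap, canned template) is the whole trick.

    ```python
    import re
    import random

    # Toy ELIZA-style rules: each regex maps to canned response templates.
    # "{0}" is filled with the captured fragment after swapping pronouns.
    RULES = [
        (re.compile(r"i need (.*)", re.I),
         ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (re.compile(r"i am (.*)", re.I),
         ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (re.compile(r"(.*) mother(.*)", re.I),
         ["Tell me more about your family."]),
        (re.compile(r"(.*)", re.I),
         ["Please go on.", "I see.", "Can you elaborate on that?"]),
    ]

    # Swap first/second person so echoed fragments read naturally.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your",
                   "am": "are", "you": "I", "your": "my"}

    def reflect(fragment: str) -> str:
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(text: str) -> str:
        # First matching rule wins; the catch-all at the end always matches.
        for pattern, responses in RULES:
            match = pattern.match(text)
            if match:
                template = random.choice(responses)
                return template.format(*(reflect(g) for g in match.groups()))
        return "Please go on."

    if __name__ == "__main__":
        print(respond("I am feeling lonely"))
        # e.g. "How long have you been feeling lonely?"
    ```

    there is no memory, no model of the user, no understanding; it just mirrors your own words back at you, which is exactly what made the anecdote so striking.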

    this was someone working around the cutting edge of the field, being personally educated by one of those big 'great man' type scientists, and not one of the egotistical shithead ones who'd have been a garbage teacher.