• Buffalox@lemmy.world
    6 hours ago

    I know about the Turing test; it’s what we were taught about and debated in philosophy class at the University of Copenhagen, when I made my prediction that strong AI would probably be possible around the year 2035.

    to exhibit intelligent behaviour equivalent to that of a human

    Here, “equivalent” actually means indistinguishable from a human.

    But as a test of consciousness it is not a fair test, because a consciousness can obviously be different from a human’s, and our limited understanding of how a simulation can fake something without it being real is also a factor.
    Still, the original question remains: how do we decide something is not conscious if it responds as if it is?

    This connects consciousness to reasoning ability in some unclear way.

    Maybe it’s unclear because you haven’t pondered the connection? Our consciousness is a very big part of our reasoning; consciousness definitely guides our reasoning, and it improves the level of reasoning we are capable of.
    I don’t see why it’s unfortunate that the example requires training for humans to understand. A leading AI has far more training than would ever be possible for any human, yet it still doesn’t grasp basic concepts, even though its knowledge is far bigger than any human’s.

    It’s hard to explain, but intuitively it seems to me the missing factor is consciousness. It has learned tons of information by heart, but it doesn’t really understand any of it, because it isn’t conscious.

    Being conscious is not just to know what the words mean, but to understand what they mean.
    I think therefore I am.

    • General_Effort@lemmy.world
      58 minutes ago

      I don’t see why it’s unfortunate that the example requires training for humans to understand.

      Humans aren’t innately good at math. I wouldn’t have been able to prove the statement without looking things up. I certainly would not be able to come up with the Peano Axioms, or anything comparable, on my own. Most people, even educated people, probably wouldn’t understand what there is to prove. Actually, I’m not sure if I do.
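
      (For reference, since the Peano Axioms came up: stating them is short, and the hard part is seeing that they suffice to ground arithmetic. A rough first-order sketch, with S the successor function and P any property, would be:)

      ```latex
      % Peano Axioms (informal first-order sketch), for reference
      \begin{align*}
      &\text{1. } 0 \in \mathbb{N} \\
      &\text{2. } \forall n \in \mathbb{N}:\ S(n) \in \mathbb{N} \\
      &\text{3. } \forall n \in \mathbb{N}:\ S(n) \neq 0 \\
      &\text{4. } \forall m, n \in \mathbb{N}:\ S(m) = S(n) \implies m = n \\
      &\text{5. } \bigl(P(0) \land \forall n\,(P(n) \implies P(S(n)))\bigr) \implies \forall n\, P(n)
      \end{align*}
      ```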

      It’s not clear why such deficiencies among humans do not argue against human consciousness.

      A leading AI has far more training than would ever be possible for any human, yet it still doesn’t grasp basic concepts, even though its knowledge is far bigger than any human’s.

      That’s dubious. LLMs are trained on more text than a human ever sees, but humans are trained on data from several senses. I guess it’s not entirely clear how much data that is, but it’s a lot and of very high quality. Humans are trained on that sense data, not on text; humans read text and may learn from it.

      Being conscious is not just to know what the words mean, but to understand what they mean.

      What might an operational definition look like?

      • Buffalox@lemmy.world
        58 seconds ago

        Just because you can’t produce a mathematical proof doesn’t mean you don’t understand the very simple truth of the statement.