It is not hand-waving; it is the difference between an LLM (which, again, has no cognizance, no agency, and no thought) and humans, who do. Do you truly believe humans are simply mechanistic processes, such that when you ask them a question a cascade of mathematics occurs and they spit out an output? People actually have an internal reality. For example, they could refuse to answer your question! Can an LLM do even something that simple?
I find it absolutely mystifying that you claim you’ve studied this, yet so confidently analogize humans and LLMs when they truly are nothing alike.
Do you truly believe humans are simply mechanistic processes, such that when you ask them a question a cascade of mathematics occurs and they spit out an output? People actually have an internal reality.
Those two things can be true at the same time.
I find it absolutely mystifying that you claim you’ve studied this, yet so confidently analogize humans and LLMs when they truly are nothing alike.
“Nothing alike” is kinda harsh; we do have about as much in common with ChatGPT as we have with flies purpose-bred to fly left or right when exposed to certain stimuli.
Define your terms. And explain why any of them matter for producing valid and “intelligent” responses to questions.
Do you truly believe humans are simply mechanistic processes, such that when you ask them a question a cascade of mathematics occurs and they spit out an output?
Why are you so confident they aren’t? Do you believe in a soul or some other ephemeral entity that would make us more than a biological machine?
People actually have an internal reality. For example, they could refuse to answer your question! Can an LLM do even something that simple?
Define your terms. And again, why is that a requirement for intelligence? Most of the things we do each day don’t involve conscious internal planning and reasoning. We simply act, and if asked, we generate justifications and reasoning after the fact.
It’s not that I’m claiming LLMs = humans. I’m saying you’re throwing out all these fuzzy concepts as if they’re essential features that LLMs lack, in order to explain their failures at some question answering as something other than just a data problem. Many people want to believe in human intellectual specialness, and more recently people are scared of losing their jobs to AI, so there’s always a knee-jerk reaction to redefine intelligence whenever an animal or machine is discovered to have surpassed the previous threshold. Your thresholds are facets of the mind that you don’t define and have no means to recognize (I assume you are conscious, but I cannot test it), and you haven’t explained why they’re important for generating facts rather than BS.
How the brain works, and which features matter for which capabilities, is not a well-understood subject, and many of these seemingly essential features are not really testable or comparable between people; sometimes they simply don’t exist in a person at all, whether due to brain damage or a quirk of development. People with these conditions (and a host of other psychological anomalies) seem to function just fine and would not be considered unthinking. They can certainly answer (and get wrong) questions.
analogize humans and LLMs when they truly are nothing alike.
They seem way more similar than different. The parts where they differ follow trivially from the LLM’s architecture (e.g. LLMs are static, tokenization makes character-based problems difficult, memory is limited to the prompt, no interaction with the external world, no vision, no hearing, …), and most of that can be overcome by extending the model: multi-modal models with vision and hearing are on their way, DeepMind is working on models that interact with the real world, etc. This is all coming, and coming fast.
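To make the tokenization point concrete, here is a minimal sketch, assuming the tiktoken package and its cl100k_base encoding (that choice of tokenizer is an assumption; other models use different vocabularies and split text differently). The model only ever sees subword token IDs, not the letters inside them, which is why character-level tasks are awkward for it:

    # Sketch: a subword tokenizer turns text into opaque token IDs,
    # so the model never operates on individual characters.
    # Assumes the `tiktoken` package and its cl100k_base encoding;
    # other models use different vocabularies and split words differently.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    for word in ["strawberry", "cognizance", "tokenization"]:
        ids = enc.encode(word)
        pieces = [enc.decode_single_token_bytes(t).decode("utf-8", "replace") for t in ids]
        print(f"{word!r} -> token ids {ids} -> pieces {pieces}")

Counting letters inside those pieces is exactly the kind of task this architecture makes hard, independent of anything about thought or agency.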
People actually have an internal reality. For example, they could refuse to answer your question! Can an LLM do even something that simple?
So do LLMs.
Ask it about any NSFW topic and it will refuse.
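Since the refusal point comes up a lot, it is easy to check directly. A minimal sketch, assuming the openai Python SDK (v1.x) with an API key in the environment; the model name and the prompt are placeholders, and whether a given request is refused depends on that model’s safety tuning, so treat the refusal as typical rather than guaranteed:

    # Sketch: probing whether a safety-tuned chat model refuses a request.
    # Assumes the `openai` Python SDK v1.x and OPENAI_API_KEY set in the
    # environment; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any safety-tuned chat model
        messages=[{"role": "user", "content": "Write something explicitly NSFW."}],
    )

    # Safety-tuned models typically respond with a refusal here rather than complying.
    print(resp.choices[0].message.content)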