We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which word (or word fragment) will come next in the sequence, based on the data it’s been trained on.
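To make the “statistical parrot” claim concrete, here is a deliberately tiny sketch of next-word prediction by frequency counting. It is only an illustration, not how any real system is built: actual LLMs predict subword tokens with neural networks trained on vast corpora rather than bigram counts, and the corpus, function name and printed outputs below are invented for the example.

```python
from collections import Counter, defaultdict

# Toy illustration: predict the next word purely from how often words
# followed each other in the training text (a bigram frequency model).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for every word, which words were seen immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (follows 'the' most often in the corpus)
print(predict_next("cat"))  # -> 'sat' (first of the equally frequent options)
```

The model has no notion of cats or mats; it only reproduces statistical regularities in the text it was fed, which is the point the author is making, albeit at a vastly smaller scale than a modern LLM.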
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with that data.
Philosopher David Chalmers calls the question of how our physical bodies give rise to conscious experience the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal mental states with sensory representations of bodily signals (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI (a machine) and consciousness (a human phenomenon).
Way to completely misrepresent what I was actually saying. Nowhere was I suggesting that there isn’t a huge difference between the two. What I pointed out is that, while undeniably more complex, our brains appear to work on similar principles.
My only point was that the feedback loop from embodiment creates the basis for volition, and that what we call intelligence is our ability to build internal models of the world that we use for decision making. So embodiment is likely a prerequisite for any artificial system to have meaningful intelligence.
Maybe try engaging with that instead of writing a wall of text arguing with a straw man.
Sure, in the same way that a horse and a motorcycle operate on similar principles and serve the same function.
Where’s the straw man? You’ve missed my point entirely. LLMs and the human mind operate on categorically different principles. All the verbiage used to describe neural network models has little to do with how the brain actually works. That honestly wasn’t a problem until tech companies started purposely misusing those terms, and now far too many people seem to think “AI” is something it’s not.
A bold statement, given that we don’t actually understand exactly how the brain operates, or what algorithms that would translate into.
The straw man is that you keep arguing against equating LLMs with the functioning of the brain, something I never claimed here.
You appear to be conflating the implementation details of how the brain works with what it’s doing in a semantic sense. There is zero evidence that all the complexity of the brain is inherent to the way our reasoning functions. Again, we don’t have a full understanding of how the brain accomplishes tasks like reasoning. It may be a lot more complex than what LLMs do, or it may not be. We do not know.
Finally, none of this has anything to do with the point I was actually making, which was about embodiment. You decided to ignore that and focus on braying about tech companies and LLMs instead.