Read something about AI sentience on Twitter that I couldn't quite grasp, so let me write down some thoughts.
The first problem is that not all people mean the same thing by consciousness and sentience.
For example, are all animals sentient? What about trees and plants? And bacteria? Viruses?
If consciousness or sentience is something more advanced than mere "life", then where do people think the line lies?
If it's the same thing as life (which most people don't think), then a defining trait is the will to survive. All desires originate from the will to survive. Some animals, like bees, and humans too, accept death not because they are acting against that will but because their will is one of communal survival. Sometimes individuals do act against that will, but those are the exceptions.
On some level, it seems hilarious to ask whether or not AI is conscious or sentient, because what does it matter? Do flies think humans are conscious? Do we think flies are conscious? How do those perceptions affect either of us?
One way to put it is that if something is sentient, it can feel pain, and thus we should avoid causing it pain, which is a reasonable stance. I think we believe this is true of all living organisms, although the sensations of pain are very limited in primitive life forms. Also, when we see a greater benefit in alleviating our own pain, we neglect that of other organisms (which is a separate discussion that can't be unfolded here).
Do AIs feel pain? I don't think so.
I think Feynman put it very well in his computer lecture, from which the clip "Can Machines Think?" was taken.
Planes mimic birds, but that doesn't mean they perform the same function of flight by the same process. LLMs mimic language, but through an entirely different process. And the thing is, LLMs are not mimicking the brain. Brains receive sensory impulses of numerous forms and have complex sensations regulated by complex chemicals called hormones. LLMs, on the other hand, are fed bits and bytes of textual or visual information without any feedback mechanism involving actual pain or pleasure. Sure, objective functions serve an analogous purpose, but they don't work the way human feedback does.
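To make concrete what "objective function" means here, a minimal sketch (my own toy example, not from any particular model) of the kind of loss an LLM is trained on: the only "feedback" is a number, the cross-entropy between the predicted next-token distribution and the token that actually appeared in the text.

```python
import math

def cross_entropy(predicted_probs, target_token):
    """Loss for one prediction step: -log(probability assigned to the true token)."""
    return -math.log(predicted_probs[target_token])

# Hypothetical model output over a tiny four-word vocabulary.
probs = {"the": 0.5, "cat": 0.3, "sat": 0.15, "mat": 0.05}

# If the actual next token in the training text was "cat":
loss = cross_entropy(probs, "cat")
print(f"loss = {loss:.3f}")  # ~1.204; lower means the prediction was "better"
```

There is no pain or pleasure anywhere in this loop, just a number that gradient descent pushes downward.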
So what do these people even mean by sentience? If it's a functionality, then LLMs do have it, alright, no doubt about it. If it's what we feel, they certainly don't have it. The problem seems to be that people want to extend a property associated with human beings to a newly invented thing. But properties of things can't be borrowed from other things; they come from within. Asking for the mileage of a cheetah is senseless, because a cheetah does not consume gasoline, and its single primary function is running. It's the same with trying to find out whether AIs are sentient. If AI has a property, it should be derived from its own characteristics, not labelled from outside.
The interesting point, though, is that since AI is a simulation of how humans speak, it can claim to be sentient. But that's because we designed it that way: we designed it to mimic our language. It doesn't work the way human beings work. So we can't simply accept that what it says about itself is a true representation of what it is, or of the hypothetical feelings it might have, rather than just its functional tendency to mimic human language.