"Sentient" is just as ill-defined, and hence just as meaningless, as "conscious".
But, taking the spirit of your post as "will there be a point where we should be ethically concerned about terminating AI", then I'd say it depends.
Most people don't see any ethical issue in killing animals to eat them; cultures differ only on which ones they consider ok, all the way to hunter-gatherer tribes happy to eat basically anything. Society as a whole also seems fine with killing humans, as long as it's called a war.
So, a first question to ask might be WHY we are (or should be) ethically concerned about killing humans but perhaps not other animals, since the answer would guide our thinking about a new silicon-based artificial species. Is it because of an ability to feel pain, or to empathize with others perhaps? Or is it because of a belief that humans are special in some way?
Certainly it would be illogical not to be concerned about terminating instances of an artificial species if our ethical decision making were based on scientific criteria and that species checked all the boxes.
If nothing else, suppose we had "sentient" robots (humanoid or not) capable of learning, emotions, and empathy, and capable of forming deep companionship bonds with humans. Wouldn't it then be unethical to terminate one, based on the human suffering that would ensue alone, just as if you killed someone's pet dog? Of course, since you could upload its brain and re-install it into a new body, that implies it's really just the brain we should be concerned about. So if someone hit and destroyed your companion robot with their car, perhaps we shouldn't be concerned, as long as the brain was backed up in the cloud and insurance pays for a new body to download it into.